Packaging a Lambda with Node.js 20 and AWS SDK for JavaScript v2

Following last week's fun with packaging Lambdas that use Python, here’s a little puzzle with Node.js.

I have an outdated project that uses a Lambda originally built on v2 of the AWS SDK for JavaScript, running on the Node.js 12.x runtime. The minimum supported runtime is now Node.js 18.x, and starting with that version the runtime ships v3 of the SDK instead of v2. I spent some time digging into migrating the functionality from v2 to v3, determined that the change wasn’t trivial, and decided to buy myself some time by shipping v2 of the SDK with my Lambda function. I didn’t find any guides on how to do that within an AWS CDK project, hence this post.
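One way to carry v2 along is to have CDK vendor it into the deployment artifact at bundling time. Here is a sketch against the aws-lambda-nodejs module; the stack, construct, and path names are hypothetical, not from the original project:

```typescript
import * as path from "path";
import { Stack, StackProps } from "aws-cdk-lib";
import { Runtime } from "aws-cdk-lib/aws-lambda";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";
import { Construct } from "constructs";

export class LegacyLambdaStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    new NodejsFunction(this, "LegacyHandler", {
      runtime: Runtime.NODEJS_20_X,
      entry: path.join(__dirname, "../lambda/handler.js"),
      bundling: {
        // Node.js 18+ runtimes ship only SDK v3, so install v2 into
        // the artifact rather than relying on the runtime to provide it.
        nodeModules: ["aws-sdk"],
      },
    });
  }
}
```

For this to work, aws-sdk also needs to be a dependency in the project’s package.json; nodeModules tells the bundler to leave it unbundled and install it into node_modules inside the asset instead.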

The root of my project isn’t the Lambda function, but the project’s entire infrastructure, of which the Lambda function and its code are but a small piece. This is the typical structure:

Read More

Devenv, Python, and packaging Lambdas

One of the first things I did with Nix flakes, and then with Devenv.sh, was standardize the Node.js environment for my AWS CDK deployments. A related issue that I let slide at the time was handling Python dependencies when a project has Lambda functions on Python runtimes.

My comfort level with Python isn’t very high, and the variety of Python dependency tooling (pip, Poetry, pyenv, venv) leaves me bewildered on a good day. The Lambda functions that I ship aren’t complicated and don’t need many dependencies, although they need to be compatible with the given runtime. A further complexity specific to Lambda functions is that AWS expects the function and its dependencies to be zipped up together.

My preferred way to handle that pre-devenv was to use pip with the -t flag, which lets me choose the installation directory. This will install the requested package and its dependencies in that directory, which is perfect for my packaging needs. Devenv has excellent support for Python, and getting Python 3.12 is easy:
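As a sketch of what that looks like (per devenv’s documented languages.python options; treat the exact attribute names as assumptions), a devenv.nix enabling a pinned Python might read:

```nix
{ pkgs, ... }: {
  # Enable Python and pin the interpreter version for Lambda compatibility
  languages.python = {
    enable = true;
    version = "3.12";
  };
}
```

With that shell active, the packaging workflow stays plain pip, e.g. `pip install -t build/ -r requirements.txt`, then zipping `build/` together with the handler (paths here are illustrative).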

Read More

A visit to the Bridgeview

The Bridgeview Bed and Breakfast in Marysville, Pennsylvania, is a unique institution. It’s a B&B designed specifically to cater to railfans and their trailing spouses. It’s located on the Susquehanna River, about 700 feet downriver from the Rockville Bridge, in the very heart of Norfolk Southern’s operations in Pennsylvania. Norfolk Southern freight trains pass frequently, and once a day in each direction Amtrak’s silvery Pennsylvanian cruises past. Just behind the B&B is the Port Road Branch and Enola Yard. Across the river is the old Pennsylvania Railroad line to Buffalo.

Everything at the B&B is organized around appreciating this location. There is WiFi, but no telephones or televisions in the rooms. There’s a sitting room-cum-library facing the Susquehanna, packed with comfortable chairs and railroad books and magazines. There’s a display in that room, with a repeater outside, showing active movements in the vicinity. All the rooms bear names (“Juniata”, “Lehigh”, “Lackawanna”) rich in railroading tradition. It’s quiet at night, if train horns don’t bother you. On that note, the Bridgeview is surely the only hotel whose policies state that “Under no circumstances should guests trespass on railroad property, including the embankment across Main Street from the B&B.”

They advertise in Trains magazine, and we finally decided to try them out in June 2023. The operation is decidedly low-tech. The website is HTTP only, no HTTPS. You make a reservation by emailing or calling Keith, the proprietor. You mail a check for a deposit and pay the rest on-site. This all worked very smoothly, but I’ll cop to some trepidation. I shouldn’t have felt any; Liz and I had a blast. The environment is incredibly chill. You sit on the big deck, watch the river, watch the trains go by. You can read a bit. There are plenty of things to do around Harrisburg. We found the Antique Marketplace of Lemoyne, Cupboard Maker Books, and Home 231.

Read More

Fifteen years on the Capitol Limited

2009 was my year of discovery on Amtrak. That January, I made my first run on the Empire Builder (as recounted in My worst best trip on the Empire Builder) from Chicago to Portland. In June, I took the Lake Shore Limited to Springfield, Massachusetts, starting a long and somewhat complex history with that train. Then, that October, I made my first trip on the Capitol Limited, traveling from Chicago to Washington and back. I’ve made 34 trips on the Capitol Limited, the most I’ve made on any American long-distance train. Those trips add up to 23,622 miles, almost enough to circle the globe.

Today the Capitol Limited disappears from Amtrak’s timetables. This change is meant to be temporary, and is driven by two needs. One, Amtrak is conducting maintenance on the East River Tunnels over the next year and needs to reduce the number of movements between Sunnyside Yard and Pennsylvania Station in New York. Two, Amtrak has an ongoing shortage of its bilevel Superliner cars. Amtrak’s solution is to take the Silver Star, one of its two New York-Florida trains, and change its northern terminus from New York to Washington. This new train is called the Floridian, and it uses the Silver Star’s single-level equipment.

I understand the decision and I can’t argue with the logic. The Superliner shortage, whatever its root causes, is real. I have concerns about the timekeeping. It’s a long way from Miami to Chicago (47 hours and 2,076 miles), and there are many opportunities for delays. On the other hand, eliminating the engine change for the Silver Star in Washington probably helps. Running single-level cars on the Capitol Limited may open the door to the long-discussed idea of through cars conveyed in Pittsburgh to the Pennsylvanian.

Read More

Migrating my galleries to GitHub and DigitalOcean

I use thumbsup to generate my galleries. That’s worth a whole other post; the short version is that the metadata is written to the images themselves, and that metadata then generates a series of static HTML pages for albums, tags, and categories. The resulting site is easy to host: no database and a minimum of CSS and JavaScript. I’ve used CloudFront and S3 on Amazon for several years, but as I previously mentioned I’m moving my stuff off Amazon.

GitHub Pages isn’t suitable on its own: I have about 13 GB of images. The generated site structure looks like this:

album/
media/
public/
index.html

Read More

Moving from S3+CloudFront to GitHub Pages

I’ve used Hexo to build this site for two years. At the time I switched, I kept my existing S3+CloudFront hosting stack with a few minor tweaks. Yesterday I moved the whole thing over to GitHub Pages.

I had a few reasons for doing this. One, I’m doing a project at work to roll out GitHub Campus and I want to get more familiar with the architecture. Two, for various reasons I want to be less dependent on the Amazon environment. I figured moving a static site from one architecture to another wouldn’t be a big deal, and I was right.

Hexo has a few deployment options. When I was using an S3 bucket I just rsynced the generated output, which won’t work here. The “right way” to handle the deployment would be to push the source files and let a GitHub Action build the site. My local environment is a little unclean (I have some uncommitted package dependencies), so that’s not a great option. I can, however, use the one-command deployment option with hexo-deployer-git. Per Hexo’s documentation, you add this to _config.yml:
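The hexo-deployer-git README documents the shape of that deploy block; roughly (the repository URL and branch here are placeholders, not this site’s actual values):

```yaml
deploy:
  type: git
  repo: https://github.com/username/username.github.io.git
  branch: main
```

After that, `hexo clean && hexo deploy --generate` builds and pushes in one step.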

Read More

Looking up the CloudFormation stack for a resource

I have an AWS account with a few dozen CloudFormation stacks deployed. Among other resources are some Route 53 hosted zones, and I was pretty sure that I’d created these manually. I wanted to get them imported into a stack, and then make some changes, but first I needed to be sure that they weren’t part of a stack already. There are enough stacks in this account that manual inspection isn’t a good option.

It turns out that a good way to do this is with the AWS CLI, and I’d like to thank Nik Rahmel on Stack Overflow for the pointer. You can use the describe-stack-resources command and pass the PhysicalResourceId of the resource instead of the StackName. Here’s an example of querying a Route 53 hosted zone:

aws cloudformation describe-stack-resources --physical-resource-id Z99999999AAAAAAAAAAAA
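The response includes the owning stack’s name. To extract just that field, the CLI’s JMESPath --query option can be added; the hosted zone ID below is the same placeholder as above:

```shell
aws cloudformation describe-stack-resources \
  --physical-resource-id Z99999999AAAAAAAAAAAA \
  --query 'StackResources[0].StackName' \
  --output text
```

In my experience, if the resource doesn’t belong to any stack the command exits with an error rather than returning an empty list, which itself answers the question.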

Read More

How to install the free version of Advanced Custom Fields with Composer

As of October 22, 2024, ACF now supports installing ACF and ACF Pro via Composer. This blog post remains up for historical reasons.

On Saturday, October 12, Matt Mullenweg usurped the popular Advanced Custom Fields plugin on the WordPress.org plugin repository and rebranded it as “Secure Custom Fields”, retaining the original plugin’s slug and listing. I’ve written at length about why he shouldn’t have done that and the broader consequences of that action.

This post is devoted to a far more prosaic question: how do I, as a person using Composer to bundle a WordPress deployment, install the free version of Advanced Custom Fields?

Read More

Fear, uncertainty, and doubt

Fear, uncertainty, and doubt (FUD) is as old as time. In a nutshell, it’s a disinformation tactic. You put out false information, engendering fear, in the hope that you can manipulate people toward a particular outcome favorable to you. No, I’m not talking about Donald Trump and the 2024 US election. I’m talking about Matt Mullenweg, WordPress.org, and Automattic over the last month.[1]

My introduction to FUD was in the late 1990s. I was in high school, I was playing around with Linux, and I was a regular poster on Slashdot. This was the high-water point of Microsoft trying to undermine public confidence in open source in general and Linux in particular. This was the period of the “Halloween documents” and SCO v. IBM. Rather than compete on the technical merits, Microsoft sought to create an environment where companies were afraid to adopt open source technologies because of nebulous licensing or patent concerns. The fact that these efforts ultimately failed doesn’t change how much time and money was spent combatting them. The opportunity cost was high.

Which brings me back to WordPress. I wrote yesterday (“The call is coming from inside the house”) about Matt Mullenweg’s seizure of Advanced Custom Fields on the WordPress.org plugins repository. I mentioned in passing that he also banned WP Engine from the repository. Something I didn’t mention is that logging into WordPress.org now requires you to attest that “I am not affiliated with WP Engine in any way, financially or otherwise.”

Read More

The call is coming from inside the house

I’ve written a few times about challenges with plugins on WordPress.org. Eleven years ago, as a comparative newcomer to WordPress, I wrote “Draining the swamp,” about a difficulty with an abandoned plugin. Later, I wrote “The Changelog Is A Lie” after a user usurped a plugin, hollowed it out, and replaced it with something else. I never thought we’d be in the situation that we are today, where WordPress.org itself has undermined trust in its own plugin repository. As they first said in Black Christmas, the call is coming from inside the house.

It’s beyond the scope of this post to fully review the dispute between Matt Mullenweg and WP Engine (see this article in The Verge for a good summary as of October 4). The latest development is that Mullenweg has usurped the free version of Advanced Custom Fields on the WordPress.org repository. The official announcement implies that the ACF team abandoned the plugin. This is inaccurate at best: Mullenweg banned the WP Engine developers from the WordPress.org repository, and blocked WP Engine-hosted sites from accessing it. Under the circumstances, switching the updating mechanism away from WordPress.org was the only responsible course of action available.

It’s a neat trick. You ban a developer from the repository, then announce that you’ve found a security issue, don’t let them release a patch for that issue, and then usurp the plugin because they haven’t fixed the issue. A tweet from WordPress claims justification under Guideline 18 of the plugin guidelines. It’s difficult to see how that guideline justifies this action. The developer hasn’t abandoned the plugin. The security issue wasn’t serious, and it’s been fixed. Put another way, if this is justifiable under Guideline 18, then there is no limiting principle.

Read More