Building An Angular App For ColdFusion Using Docker Compose
In a world where containerized development exists, I feel like a failure any time I have to run a build script—such as `npm install` or `nvm use`—directly on my host computer. Containers should be obviating this type of workflow. But, the problem is, I'm not really that good at containerization. Recently, however, I had a mental breakthrough: I realized that my JavaScript builds didn't have to execute inside my ColdFusion container; instead, I could use a separate Node.js Docker container to perform the build and then output the distribution files to my ColdFusion container.
I'm Deploying to a Virtual Private Server (VPS)
To provide some context, this isn't a technique that I'm using in a continuous deployment (CD) scenario—I'm not building images and then using a container orchestrator. Instead, I'm building all of the code files locally and then pushing them to my personal VPS. As such, this technique doesn't have to be quite as clean as a technique you'd use in a more professional context.
Angular Inside ColdFusion
This blog runs on ColdFusion. Locally, I develop the CFML code using a CommandBox Docker container. My blog platform doesn't actually have any Angular running within it. But, I've decided to start building and hosting some Angular utilities on my blog in a place where I can easily access them.
Each utility will be located within its own sub-folder. But, this sub-folder will only contain distribution files. The source code for each utility is stored outside of the webroot within its own `src` directory.
To illustrate, my Sprint Name Generator is my first Angular utility. And this is what my local file system looks like (truncated):
/bennadel.com
- src/
  - utils/
    - sprint-names/
      - docker-compose.yaml
      - Dockerfile
- wwwroot/
  - utils/
    - sprint-names/
Notice that the `sprint-names` folder in my `src` directory has its own `Dockerfile` and `docker-compose.yaml` files. Building the Sprint Name Generator Angular app will take place in the Node.js container provided by that `docker-compose.yaml` file; and, the output of that build process will copy files into the `sprint-names` folder in my `wwwroot` directory.
Here's my `Dockerfile`:
```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package.json ./
RUN npm install
```
As you can see, it doesn't really do anything on its own. It doesn't run any build; and, there's no `CMD` instruction to hold the container open. All it does is set up the proper Node.js context, copy the `package.json` file, and install the `npm` dependencies. The actual build instructions will be provided by the `docker-compose.yaml` file.
Here's my `docker-compose.yaml` file—it defines two services. The `app` service is a one-time runner that builds the distribution files in production mode (via `npm run build`) and exits. The `app-dev` service, on the other hand, extends the `app` service but holds the running container open (via `npm run watch`) for ongoing local development.
```yaml
version: "2.4"
services:
  # This service will build the distribution files into the CFML app volume and
  # then stop. For active development, use the app-dev service.
  app:
    build: "."
    image: "bennadel.com/utils/sprint-names:latest"
    command: [ "npm", "run", "build" ]
    volumes:
      - "./:/app"
      # Mount my CFML folder to Angular's internal dist folder. This way, when
      # Angular builds the code, the compiled files are synchronized to my
      # ColdFusion application.
      - "../../../wwwroot/utils/sprint-names:/app/dist"
      # Allow Docker to manage the node_modules folder via a named volume so
      # that we're not incurring the cost of constantly syncing thousands of
      # files back to the host. The files are written directly to Docker's
      # native file system.
      - "app_node_modules:/app/node_modules"
  # When actively developing the app code, this service overrides the command
  # and enables watch mode. Note that this creates a different container that
  # will need to be removed.
  app-dev:
    extends: "app"
    command: [ "npm", "run", "watch" ]
# Using docker volumes to manage node_modules can help with performance.
volumes:
  app_node_modules:
```
The ah-ha moment that I had is codified in the `app` service's `volumes` configuration:

```yaml
- "../../../wwwroot/utils/sprint-names:/app/dist"
```
Here, the `/app/dist` folder, which is where Angular is outputting the compiled files inside the Node.js container, is actually mounted up-and-over into the `wwwroot` folder. Now, when I run `docker compose up app` within my `src` folder, it will build the Angular code and Docker will synchronize that code back into my CFML code directory on my host computer.
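To make the two workflows concrete, here's a sketch of the day-to-day invocations, assuming you're running them from the `sprint-names` folder inside `src` (where the compose file lives):

```shell
# One-off production build: runs "npm run build" inside the Node.js container,
# writes the compiled files into ../../../wwwroot/utils/sprint-names via the
# volume mount, and then the container exits.
docker compose up app

# Ongoing local development: same image, but "npm run watch" holds the
# container open and rebuilds on every source-file change (Ctrl-C to stop).
docker compose up app-dev
```

These are environment-dependent Docker commands, so treat them as a usage sketch rather than something to copy blindly.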
Docker then subsequently synchronizes those distribution files into my CommandBox container using a separate `docker-compose.yaml` file that runs my ColdFusion blog for local development. I'm not showing that compose file because it's not really relevant.
The rest of the Angular code is fairly vanilla and knows nothing about my ColdFusion blogging platform. But, there is a matter of clean-up. Every time I run this Angular build via `docker compose`, I'm creating at least one Docker image, container, and named volume. Once I'm done with development, I use an `npm run kill` script to try and remove the unnecessary artifacts.
Here's my `package.json` file:
```json
{
    "name": "app",
    "version": "0.0.0",
    "scripts": {
        "ng": "ng",
        "build": "ng build --configuration production",
        "watch": "ng build --watch --configuration development",
        "kill": "docker compose down --rmi all --volumes && rm -r ./node_modules"
    },
    "private": true,
    "dependencies": {
        "@angular/animations": "18.1.3",
        "@angular/common": "18.1.3",
        "@angular/compiler": "18.1.3",
        "@angular/core": "18.1.3",
        "@angular/forms": "18.1.3",
        "@angular/platform-browser": "18.1.3",
        "@angular/platform-browser-dynamic": "18.1.3",
        "@angular/router": "18.1.3",
        "rxjs": "7.8.1",
        "tslib": "2.6.3",
        "zone.js": "0.14.8"
    },
    "devDependencies": {
        "@angular-devkit/build-angular": "18.1.3",
        "@angular/cli": "18.1.3",
        "@angular/compiler-cli": "18.1.3",
        "typescript": "5.5.4"
    }
}
```
The `npm run kill` script executes the following (as best as I understand it—remember, I'm not an expert here):

- `docker compose down` - this stops and removes the running Docker containers and network.
- `--rmi all` - this flag removes any Docker images created.
- `--volumes` - this flag removes any named volumes created.
- `rm -r ./node_modules` - this removes the empty `node_modules` folder that is created as a result of the named volume mount.
It seems that even after all of this, Docker is still keeping some cached layers around; but, from what I've been able to read, you can't actually remove those without running a `prune` command; and, I'm pretty sure that such a command would remove more than I actually want in my local Docker for macOS setup.
Not a One Size Fits All Approach
As I mentioned at the top, this is specifically helpful for me because I deploy code files to a VPS using FTP, not some sort of container orchestration system. I'm also not super comfortable with containers. If you have a suggested improvement here, I'm 100% open to it. But, what I don't want is to run `docker compose up` on my ColdFusion app and have to sit there while any number of other JavaScript build scripts run—I like the fact that this Angular build is separate from the main ColdFusion app.
Reader Comments
Thanks for sharing your approach. Given your current setup with FTP and your preference to keep the Angular build separate from the ColdFusion app, your strategy makes sense. However, if you're open to exploring alternatives, I suggest looking into using a build pipeline or CI/CD system that can automate and streamline the deployment process without the need to fully dive into containers.
One improvement could be setting up a build script that runs locally or on a build server, automating the Angular build process before deploying it via FTP. This way, the ColdFusion app remains untouched by the build steps, but you still benefit from an automated, repeatable process. Tools like GitHub Actions or Jenkins could help with this if you're open to trying them out. It would maintain your current separation while offering more control and efficiency in the deployment process. Let me know what you think!
p.s.: I am so glad I met you in 2008 at the NY CF meeting group. Since then I have been reading your posts and they are awesome. Thank you!
@Emanuel,
Good to touch-base after so many years 😀
At work, where we do use a fully containerized solution (using Codeship for the builds, Quay for the image registry, and Kubernetes for the orchestration), the whole system builds together. But, the build also takes like 8 minutes (which feels like an absolute eternity).
The issue there is that all of the embedded JavaScript applications perform `npm install` and `npm run build` style workflows every time the code needs to be deployed. The upside to that workflow is that it's completely hands-off and repeatable. Meaning, no one has to keep track of which files have been deployed where. So, that's pretty groovy.

But, going back to your suggestion about GitHub Actions, are you saying that even I could leverage GHA in my personal setup? I wasn't sure what you meant about GHA preparing the Angular build prior to FTP. Would I have to commit the build files to the repo for that to work? Right now, I'm only committing the `src` files; and then, I'm making sure that I FTP the `dist` files afterwards.

One issue that I've stumbled across with this approach is the `package-lock.json` file generation. Since the `package.json` is being copied in via the `Dockerfile` and then the `npm install` is executed as part of the image compilation, the `package-lock.json` file never makes its way back into the source-control of the parent app. And, when I go to mount a volume for active development, I end up shadowing that file with my mount.

The `node_modules` dance just continues to be a problem in these kinds of builds. Everything that I've seen relating to it is just work-arounds all the way down.