Cloudflare has made great progress with its Workers service since its launch in 2017. The idea of hosting functions on the "edge" is quite appealing, especially at the low price Cloudflare is offering the service. It can be a tad tricky, however, if you want to integrate with AWS services. Today I want to discuss the method that I used, explain why, and then write some demo code to get started!
There is extensive documentation about Cloudflare Workers and it is quite straightforward to get started, but here are the main steps needed to get going quickly:
Create a new directory and navigate to it in a terminal. Then execute the following command to create a “hello world” template.
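Assuming the legacy `wrangler` CLI in use at the time of writing, the template command looks like this (`my-app` is the project name referenced in the next steps):

```shell
# Scaffold a "hello world" Worker project into ./my-app
wrangler generate my-app
```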
Inside the wrangler.toml file, update the `account_id` field with your account information.
Execute `wrangler publish` inside the `my-app` directory.
If everything was set up properly in the Prerequisites section, a Worker should now be deployed in the “Workers” dashboard on Cloudflare! Navigate to the link provided to be greeted with the familiar “hello world.”
Congratulations, you’re now a Cloudflare developer!
Getting Started
We are going to use a Webpack deployment, `fetch` to make our requests, and `aws4` to handle the AWS Signature Version 4 signing required to talk to (most) AWS API services.
Let’s talk about these choices and why we need them, in the form of an FAQ:
Q: “Cloudflare runs with JavaScript. Why don’t we just use the AWS-SDK?”
A: The AWS-SDK uses XHR to make its requests, and XHR is not supported by Workers. Currently the only option for making requests to outside servers is `fetch`.
Q: “Why do we need Webpack?”
A: In short, Webpack allows us to bundle all of our dependencies and multiple files into a single file that can be deployed as our Worker. Without Webpack we would be unable to include multiple files or dependencies in our Worker and would be forced to write everything in a single file.
A: `aws4` is a Node module created by GitHub user mhart. Taken directly from the README on the project’s GitHub page, it is:
“A small utility to sign vanilla Node.js http(s) request options using Amazon’s AWS Signature Version 4.”
We need this because without it we would have to implement all of AWS Signature Version 4 manually. While that is possible, Mr. Hart has done a wonderful job of wrapping all the hard work up into an easy-to-use Node module.
*While not implemented in this example, it is worth mentioning the existence of aws4fetch as well, another module by mhart.
For this example, we will post a message to an SQS queue. More information about how to form requests for other services can be found in the AWS docs by selecting the service of interest and navigating to the “API Reference”.
At the time of writing, Cloudflare does not have an official boilerplate template for deploying a Webpack project. I have created a very basic starting point that can be found here.
Clone the repo by executing the following command:
Move into this directory and then create a wrangler.toml file with the following commands:
cd cloudflare_webpack_template
wrangler init
Update the wrangler.toml file with your account information.
Let’s store our AWS credentials as Cloudflare Secrets. This can be done by entering the following commands and providing the prompts with your account credentials. If you have issues, try running `wrangler preview` first.
wrangler secret put env_ACCESSKEYID
wrangler secret put env_SECRETACCESSKEY
Within the project that was just cloned, make a folder `./mod`, and inside it a folder `./mod/sqs` to hold a node module for our SQS logic, then navigate there in your terminal:
mkdir mod
mkdir mod/sqs
cd mod/sqs/
Execute the following commands.
npm init -y
npm install aws4 --save
Now we are going to write a simple function to accept some required information that will allow us to make a post to SQS!
Create an index.js file in the /mod/sqs directory, open it in a text editor, and update the file to contain the following:
var aws4 = require('aws4')

// Signs a SendMessage request with AWS Signature Version 4 (as a pre-signed
// query string) and posts it to the queue via fetch.
module.exports.postToQueue = async function (Region, msg, queueName, SECRETKEY, ACCESSKEYID) {
  let hostName = 'sqs.' + Region + '.amazonaws.com'
  // Encode the message body so spaces and special characters survive the URL
  let Path = queueName + '/?Action=SendMessage&MessageBody=' + encodeURIComponent(msg)

  var opts = {
    service: 'sqs',
    region: Region,
    path: Path,
    host: hostName,
    signQuery: true
  }

  aws4.sign(opts, { secretAccessKey: SECRETKEY, accessKeyId: ACCESSKEYID })

  var URL = 'https://' + opts.host + opts.path
  var response = await fetch(URL)
  console.log(response)
  return (response)
}
What is going on here?
First, we format the request the way the SQS API expects it.
After formatting our request and signing it with our security credentials, we can form the URL from which to make our request to the SQS API.
The screenshot (from the documentation) found below might give you a better understanding of why the request is formatted the way it is.
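To make the shape of the request concrete, the final pre-signed URL handed to `fetch` looks roughly like the following (the account ID, region, and queue name are the ones used later in this post; the `X-Amz-*` values are placeholders added by `aws4`):

```
https://sqs.us-east-2.amazonaws.com/244455244329/test/?Action=SendMessage&MessageBody=hello%20sqs!&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Date=...&X-Amz-Signature=...
```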
Then we await a response from the server, log the response to the console, and ultimately return the response to the calling function.
Please note that `console.log` output will only appear in the browser’s console when running within the preview environment, which can be launched via `wrangler preview`. Cloudflare Workers have no internal logging solution in a production environment.
Now let’s navigate back to the root directory of our project and implement our SQS module!
Execute the following command to create an SQS queue (noting down the returned queue URL):
aws sqs create-queue --queue-name test
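If the queue is created successfully, the AWS CLI prints the queue URL as JSON. With the region and account ID used later in this post it would look something like this (your account ID and region will differ):

```
{
    "QueueUrl": "https://sqs.us-east-2.amazonaws.com/244455244329/test"
}
```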
Open a terminal in the root folder of the project directory and execute the following command to tie in the SQS module we created earlier:
npm install ./mod/sqs --save
Open up the ./index.js file in a text editor and update it to contain the following. Please note that the `region` and `queuePath` variables need to reflect the information from the queue URL that you noted down in the previous steps.
const sqs = require('sqs')

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

/**
 * @param {Request} request
 */
async function handleRequest(request) {
  if (request.method == "POST") {
    let msg = "hello sqs!"
    let region = "us-east-2"
    let queuePath = "/244455244329/test"
    let response = await sqs.postToQueue(region, msg, queuePath, env_SECRETACCESSKEY, env_ACCESSKEYID)
    return (response)
  }

  return new Response('Please use a POST method to send messages to SQS', {
    headers: { 'content-type': 'text/plain' },
  })
}
What is going on here?
When we get a request, we check whether the request is of type POST.
If so, we call our `postToQueue` function.
If it is not a POST request, we respond asking the user to please use a POST request.
We could omit this check and call the function on all requests, but I want to demonstrate some simple routing along with some functionality of the Cloudflare testing environment.
Let’s test what we have so far!
In the root directory of the project execute the following command:
wrangler preview
This will build and publish the project to Cloudflare’s ‘Preview Service’ and open the default web browser to view the deployment.
If everything worked properly you should now see something similar to this:
Navigate to the ‘Testing’ tab located at the top and change the request method from ‘GET’ to ‘POST’.
Select ‘Run Test’. If everything worked properly you should now see something similar to this:
Awesome! We are able to push a hard-coded message via Cloudflare, but what if we want to be able to change the message we are sending? Or any of the other variables?
Change:
let msg = "hello sqs!"
To
let msg = request.headers.get("msg")
Now, instead of a hard-coded variable, we can pass in our message via a header parameter in the request to the server!
Execute the following command:
wrangler preview
Navigate to the testing section and notice that we can include a header parameter here.
Create one named “msg”
Select “Run Test”
We can view our posted SQS messages by executing the following command:
aws sqs receive-message --queue-url <queue-url>
An example with the queue url created for this blog:
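Reconstructed from the region, account ID, and queue name used earlier in this post, the command would look like:

```shell
# Poll the queue created above for messages
aws sqs receive-message --queue-url https://sqs.us-east-2.amazonaws.com/244455244329/test
```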
We are now able to send messages in a development environment!
Now let’s move on to deploying our function and sending some information to it via a curl request!
Execute the following command in the root directory of the project:
wrangler publish
It should return something similar to the following:
⬇️ Installing wranglerjs...
⬇️ Installing wasm-pack...
✨ Built successfully, built project size is 104 KiB.
✨ Successfully published your script to https://jstst.flaretokinesis.workers.dev
Send some information via the following curl command:
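Assuming the Worker URL from the publish output above (yours will differ), a curl request might look like this, with the `msg` header carrying the message body:

```shell
# POST to the deployed Worker; the "msg" header becomes the SQS message body
curl -X POST -H "msg: hello from curl" https://jstst.flaretokinesis.workers.dev
```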
By executing the same command as in step 13, we can again view the information we sent.
aws sqs receive-message --queue-url <queue-url>
Conclusion
Cloudflare has some awesome features and an unbeatable price, but its usefulness, like most things, will depend on your use case. Consider the following limitations before choosing Cloudflare as the solution for your project:
While handling a request, each Worker script is allowed to have up to six connections open simultaneously.
50ms maximum compute time for each function call.
You are limited to the use of a specific set of ports.
Each Workers instance can consume up to 128MB of memory.
The limit for subrequests a Workers script can make is 50 per request.
Provides no access to concurrency
A Little About TriggerMesh
TriggerMesh believes enterprise developers will increasingly build applications as a mesh of cloud native functions and services from multiple cloud providers. We believe this architecture is the best way for agile businesses to deliver the effortless digital experiences customers expect and at the same time minimize infrastructure complexity.
To bring today’s enterprise applications into this future, the TriggerMesh cloud native integration platform ties together cloud computing, SaaS, and on-premises applications. We do this through an event-driven cloud service bus that connects application workflows across varied infrastructures.
Jeff is a self-taught Cloud Engineer at TriggerMesh, focusing primarily on eventing and service integration. You can normally find Jeff dogfooding a service or building a magnificent new bike shed.