Minor corrections for porting-to-workers
This commit is contained in:
parent
3f35f8a7f8
commit
a3c2c311e2
@ -1,5 +1,5 @@
#+title: Porting to Workers
#+date: 2024-01-30
#+date: 2024-01-28
This website is now using Cloudflare Workers!
@ -11,7 +11,9 @@ So first, post storage! With Worker size limits, I decided to go with storing po
After some consideration, I scrapped the idea of generating and storing the result for other Workers on the fly and looked at the Queue option instead. The plan was to pre-render the content and store it somewhere (more on that later) so I can very quickly render content in the background when something is published. When a file is pushed to R2, I can fire off a webhook that queues up the new or changed files for rendering and storing on the edge. It does seem to introduce a little more latency when it comes to publishing content, but in reality it's faster because it doesn't require me to rebuild, push, and restart a container image.
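A sketch of that webhook step, with invented binding names (=POSTS= for the R2 bucket, =RENDER_QUEUE= for the queue) since the post doesn't name them:

```typescript
// Hypothetical webhook handler body: list the bucket and queue every org
// file for re-rendering. The binding names and the .org filter are
// assumptions, not the site's real configuration.

function orgKeys(keys: string[]): string[] {
  return keys.filter((k) => k.endsWith(".org"));
}

interface EnvLike {
  // Minimal shapes of the R2 and Queue bindings this sketch touches.
  POSTS: { list(): Promise<{ objects: { key: string }[] }> };
  RENDER_QUEUE: { send(message: { key: string }): Promise<void> };
}

// Called from the Worker's fetch handler when the webhook fires.
// Ignores R2 list pagination for brevity.
async function queueAllPosts(env: EnvLike): Promise<number> {
  const listing = await env.POSTS.list();
  const keys = orgKeys(listing.objects.map((o) => o.key));
  for (const key of keys) {
    await env.RENDER_QUEUE.send({ key });
  }
  return keys.length;
}
```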
Where to store the rendered content stuck with me for a bit. Initially I wanted to go with KV, since it seemed it would be faster, but after some experimentation I found it was substantially slower, since there's no way to easily sort the keys based on content without reading /everything/ into memory and then sorting during Worker execution. Thankfully, I could reach for a real database, and created a D1 instance to hold a single table with the posts. Since it's SQLite-based, I can just use SQL for the queries and take advantage of much more optimised codepaths for sorting or fetching the data I actually need. While replication might be slower than KV, it's far from noticeable.
Where to store the rendered content stuck with me for a bit. Initially I wanted to go with KV, since it seemed it would be faster, but after some experimentation I found it was substantially slower, since there's no way to easily sort the keys based on content without reading /everything/ into memory and then sorting during Worker execution. Thankfully, I could reach for a real database, and created a D1 instance to hold a single table with the posts. Since it's SQLite-based, I can just use SQL for the queries and take advantage of much more optimised codepaths for sorting or fetching the data I actually need. While D1 doesn't currently replicate, it will be a huge speed boost when it does!
/Note: this section originally said that D1 replicates. I was then told, and discovered, that this is not the case at the moment. Whoops./
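As a sketch of what the SQL route buys, here is a query against D1's =prepare=/=bind=/=all= API where SQLite does the sorting instead of the Worker. The =posts= table and its columns are invented for illustration; the post doesn't show the actual schema.

```typescript
// Fetch the newest posts, letting SQLite sort on published_at rather than
// reading everything into Worker memory (the problem KV had). Schema
// names here are assumptions.

interface D1Like {
  prepare(sql: string): {
    bind(...values: unknown[]): { all(): Promise<{ results: unknown[] }> };
  };
}

async function latestPosts(db: D1Like, limit = 10): Promise<unknown[]> {
  const { results } = await db
    .prepare("SELECT slug, title, html FROM posts ORDER BY published_at DESC LIMIT ?")
    .bind(limit)
    .all();
  return results;
}
```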
The workflow thus far is:
@ -19,8 +21,6 @@ The workflow thus far is
2. A webhook is sent to a Worker (not by R2)
3. The worker fetches the list of files from R2 and queues them for "indexing"
4. Workers are executed to consume the queue, rendering the files and storing them in D1
5. D1 is replicated to the edge
6. Workers on the edge now have access to the rendered content at the edge
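Steps 3 and 4 above boil down to a consumer that drains the queue batch by batch. A minimal sketch, with the R2 read, the org-to-HTML renderer, and the D1 write passed in as functions, since none of them are shown in the post:

```typescript
// Hypothetical queue consumer: render each queued file and store the
// result, acking messages only after a successful write so failed ones
// can be retried.

interface MessageLike {
  body: { key: string };
  ack(): void;
}

async function consume(
  batch: { messages: MessageLike[] },
  getObject: (key: string) => Promise<string>, // reads the org source from R2
  render: (src: string) => string, // org -> HTML (assumed helper)
  upsert: (key: string, html: string) => Promise<void>, // writes the row to D1
): Promise<number> {
  let done = 0;
  for (const msg of batch.messages) {
    const source = await getObject(msg.body.key);
    await upsert(msg.body.key, render(source));
    msg.ack(); // don't retry messages we've stored successfully
    done++;
  }
  return done;
}
```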
The final piece is telling the Worker it can cache all the responses in Cloudflare's cache, and we're all set! Each response is cached for 4 hours before a Worker has to be hit to fetch the content from D1 again.
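For illustration, the cache TTL as headers: 4 hours is 14400 seconds. In a real Worker this would be set on the =Response= (optionally paired with =caches.default=); a plain object is used here to keep the sketch self-contained.

```typescript
// Build the headers for a cacheable HTML response.
const FOUR_HOURS_SECONDS = 4 * 60 * 60; // 14400

function cacheableHeaders(): Record<string, string> {
  return {
    "Content-Type": "text/html; charset=utf-8",
    // Lets Cloudflare's cache serve the response without waking the
    // Worker until the TTL expires.
    "Cache-Control": `public, max-age=${FOUR_HOURS_SECONDS}`,
  };
}
```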