2 Major paths to take advantage of CDNs

When considering a switch from traditional web hosting to something like S3 (or IPFS) plus web workers or equivalent stateless compute resources (GCP, Lambda, etc.), you need to re-think how you deploy your applications. Many of the complex backends you are used to are simply not feasible to run on the live web with this model. That said, it's not hard to write CGIs in a stateless fashion, and many already keep their state client-side or in a database, which is a model that works just fine here.
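To make the "stateless CGI" point concrete, here is a minimal sketch of a Cloudflare-Workers-style handler (module syntax). The POSTS_BASE_URL binding and the JSON layout of the posts are assumptions for illustration; the point is that nothing is kept in the worker process between requests.

```typescript
// Minimal worker-style handler: no in-process state; every request is
// answered from a backing object store (POSTS_BASE_URL is an assumed binding).
interface Env {
  POSTS_BASE_URL: string; // e.g. an S3 or IPFS gateway URL set at deploy time
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    // Map /posts/<slug> onto an object in the bucket; anything else 404s.
    const match = url.pathname.match(/^\/posts\/([\w-]+)$/);
    if (!match) {
      return new Response("Not found", { status: 404 });
    }
    // The bucket (or the client) is the source of truth, never the worker.
    const upstream = await fetch(`${env.POSTS_BASE_URL}/${match[1]}.json`);
    if (!upstream.ok) {
      return new Response("No such post", { status: 404 });
    }
    return new Response(upstream.body, {
      headers: { "content-type": "application/json" },
    });
  },
};
```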

Most of your backend probably doesn't even need to be public, though. It could easily sit behind a firewall and just push updates to your staging and production buckets and workers. In fact, I'm considering this model for tCMS right now! There's no good reason the login functionality needs to be public, at least for post editors -- all of them could run a local instance and push out updates, supposing I had an rqlite data store sitting in a bucket as a read-only node, with the editors behind firewalls doing the RAFT thing.
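As a sketch of the read side of that arrangement, the public-facing code would only ever issue read queries against the read-only rqlite node over its HTTP API; the node address and the posts table below are assumptions.

```typescript
// Query a read-only rqlite node over its HTTP API (GET /db/query).
// The node address and the `posts` schema are assumed for illustration;
// writes happen elsewhere, on the firewalled editor nodes doing the Raft thing.
const RQLITE_READ_NODE = "http://read-replica.example.com:4001";

async function recentPosts(limit = 25): Promise<unknown[][]> {
  const q = encodeURIComponent(
    `SELECT title, created FROM posts ORDER BY created DESC LIMIT ${limit}`
  );
  const res = await fetch(`${RQLITE_READ_NODE}/db/query?q=${q}`);
  if (!res.ok) throw new Error(`rqlite query failed: ${res.status}`);
  const body = await res.json();
  // rqlite responses look like { results: [ { columns, types, values } ] }
  return body.results?.[0]?.values ?? [];
}
```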

From there the question is whether to do static generation or to load SQLite with sql.js and do everything client-side (or a combination of both). Static versus dynamic web pages is well-trodden ground at this point, so do what feels best for your application. Personally, I will probably go with static generation, as I prefer to run as little as possible on the client machine to get the best user experience.
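For the sql.js route, the client-side half looks roughly like this; the /data/site.sqlite URL is an assumption, and the database file is just another static object served from the bucket.

```typescript
// Load a read-only SQLite database in the browser with sql.js and query it.
// The /data/site.sqlite path is assumed; the bucket serves it as a plain
// static file alongside the rest of the site.
import initSqlJs from "sql.js";

async function loadPosts() {
  const SQL = await initSqlJs({
    // sql.js needs to know where to fetch its wasm binary at runtime.
    locateFile: (file) => `https://sql.js.org/dist/${file}`,
  });
  const buf = await fetch("/data/site.sqlite").then((r) => r.arrayBuffer());
  const db = new SQL.Database(new Uint8Array(buf));
  // Everything runs client-side; the server only ever handed us static bytes.
  const results = db.exec(
    "SELECT title, created FROM posts ORDER BY created DESC"
  );
  return results[0]?.values ?? [];
}
```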

The big drawback to using IaaS has (historically) been higher TCO and a lack of good abstractions to make self-hosting easy under such models. Things like OpenStack make self-hosting possible, but prohibitively expensive, and for most webmasters it feels like swatting flies with an elephant gun.

Even containerization and its orchestration tools require a lot of extra thought and code versus the traditional LAMP approach. What's really needed to swim in both pools easily are abstractions that make it easy for you to:

  1. Treat local directories like they are S3 buckets (and/or IPFS CIDs) so you have flexibility for storage deployment
  2. Treat local httpds like they're a web worker API you deploy to, rather than vhosts

Implementing such tools would effectively allow any shared host to transition their existing infrastructure into such products, albeit without automatic scaling on their end. That said, if multiple servers were supported in a sort of federated model (again, likely mediated thru rqlite), that would largely ameliorate such concerns.

For S3, s3proxy is exactly what we are looking for. As for imitating web workers (or other "stateless" compute resources), that would be far more complex, given there is no standardization across vendors.
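As a sketch of abstraction #1, you would run s3proxy in front of a local directory and point an ordinary S3 client at it; the endpoint, bucket name, and dummy credentials below are assumptions, and the same code should then work against real S3 in production by swapping the endpoint and keys.

```typescript
// Point the standard AWS SDK at a local s3proxy instance backed by a plain
// directory. Endpoint, bucket name, and credentials are assumptions here;
// production would swap in the real S3 endpoint and keys with no code change.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  endpoint: "http://127.0.0.1:8080", // assumed local s3proxy address
  region: "us-east-1",               // required by the SDK, ignored locally
  forcePathStyle: true,              // local proxies generally want path-style URLs
  credentials: { accessKeyId: "local", secretAccessKey: "local" },
});

async function publish(key: string, html: string): Promise<void> {
  await s3.send(
    new PutObjectCommand({
      Bucket: "www",            // assumed bucket / base directory name
      Key: key,                 // e.g. "posts/hello-world.html"
      Body: html,
      ContentType: "text/html",
    })
  );
}
```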
