Professional Services
Custom Software
Managed Hosting
System Administration
See my CV here.
Send inquiries here.
Open Source:
tCMS
trog-provisioner
Playwright for Perl
Selenium::Client
Audit::Log
rprove
Net::OpenSSH::More
cPanel & WHM Plugins:
Better Postgres for cPanel
cPanel iContact Plugins
The consistent theme I've been driving at with tCMS development is to move as much of the program as possible out of code and into data. The most recent work in this vein was creating parent-child relationships between posts (series), and allowing posts to embed other posts within themselves. The next thing I'm interested in doing is to move the entire page structure into data as well. Recently working with JavaScript component-based frameworks has given me the core inspiration for what I ought to do.
Any given page can be seen as little more than a concatenation of components in a particular order. Components themselves can be seen the same way, which simplifies rendering them to a matter of recursive descent: walk the tree to build an iterator you feed to the renderer. How do I implement this with the current system?
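Here's a minimal sketch of the idea in Perl. The component structure and field names (template, children, content) are hypothetical, illustrating the approach rather than the actual tCMS schema:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # A page is an ordered list of components, and components may contain
    # child components. Recursive descent flattens the tree into the
    # ordered list the renderer iterates over.
    sub flatten_components {
        my ($component) = @_;
        return ($component, map { flatten_components($_) } @{ $component->{children} // [] });
    }

    sub render_component {
        my ($component) = @_;
        # Stand-in for a real template engine call (e.g. Text::Xslate).
        return "<!-- $component->{template} -->$component->{content}\n";
    }

    # Rendering the page is then just concatenating the flattened list.
    sub render_page {
        my ($page) = @_;
        return join '', map { render_component($_) } flatten_components($page);
    }

    my $page = {
        template => 'page', content => '',
        children => [
            { template => 'header', content => 'My Site' },
            { template => 'post', content => 'Hello!',
              children => [ { template => 'embed', content => 'An embedded post' } ] },
        ],
    };
    print render_page($page);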
Every post needs to support an array of components. This will necessitate a re-thinking of how the post interface itself works. I should probably have some "preview" mechanism to show how the post will look after you frankenstein it together.
This will enable me to implement the most significant performance improvement available (static renders) incredibly easily, as a page render becomes little more than a SELECT with a concatenation over a table of pre-rendered component instances. To keep updates cheap, we need only check the relevant post timestamps to see whether anything in the recursive descent needs a re-render.
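A sketch of what that cache might look like, again with hypothetical table and column names (SQLite via DBI shown here, using a sorted subquery so GROUP_CONCAT sees the rows in render order):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=render_cache.db', '', '', { RaiseError => 1 });

    # One row per pre-rendered component instance, stored in render order.
    $dbh->do(<<~'SQL');
        CREATE TABLE IF NOT EXISTS rendered_components (
            page_id     TEXT    NOT NULL,
            ordinal     INTEGER NOT NULL,
            html        TEXT    NOT NULL,
            rendered_at INTEGER NOT NULL
        )
        SQL

    # A static page render is just a concatenation over the cache.
    sub render_page_static {
        my ($page_id) = @_;
        my ($html) = $dbh->selectrow_array(<<~'SQL', undef, $page_id);
            SELECT GROUP_CONCAT(html, '')
              FROM (SELECT html FROM rendered_components WHERE page_id = ? ORDER BY ordinal)
            SQL
        return $html // '';
    }

    # Updates stay cheap: only component instances older than the post's
    # modification time need a re-render.
    sub stale_components {
        my ($page_id, $post_mtime) = @_;
        return $dbh->selectcol_arrayref(
            'SELECT ordinal FROM rendered_components WHERE page_id = ? AND rendered_at < ?',
            undef, $page_id, $post_mtime,
        );
    }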
As of this writing, a render of the most complicated page of any tCMS install takes 21ms. This should bring that time down to 2-3ms. It will also enable me to implement the feature that will turn tCMS into a best-of-breed content publishing framework: automatically syndicating each page we render to multiple CDNs and transparently redirecting to them in a load-balancing fashion.
From there I see little that needs doing beyond improving the posting interface and adding userland features. I still want all of that, but believe technical excellence comes first.
When considering a switch from traditional web hosting to something like S3 (or IPFS) plus web workers or equivalent stateless compute resources (GCP, Lambda, etc.), you need to re-think how you deploy your applications. Many of the complex backends you are used to are simply not feasible to have on the live web with this model. That said, it's not hard to write CGIs in a stateless fashion; many already keep their state client side or in a database, which is a model that works just fine here.
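To illustrate what I mean by stateless, here's a toy CGI where the only state is a tamper-evident cookie on the client; the names and the secret are made up for the example:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI;
    use Digest::SHA qw(hmac_sha256_hex);

    # Nothing persists on the server between requests; the visit count
    # rides in an HMAC-signed cookie so the client can't forge it.
    my $SECRET = 'changeme';    # would come from deploy config in real life
    my $q      = CGI->new;

    my ($count, $sig) = split /:/, ($q->cookie('visits') // '0:');
    $count = 0 unless defined $sig && $sig eq hmac_sha256_hex($count, $SECRET);
    $count++;

    my $cookie = $q->cookie(
        -name  => 'visits',
        -value => join(':', $count, hmac_sha256_hex($count, $SECRET)),
    );
    print $q->header( -type => 'text/plain', -cookie => $cookie );
    print "You have visited $count times.\n";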
Beyond that, most of your backend probably doesn't even need to be public. It could easily sit behind a firewall and just push updates to your staging and production buckets and workers. In fact, I'm considering this model for tCMS right now! There's no good reason why the login functionality needs to be public, at least for post editors -- all of them could run a local instance and push out updates, supposing I had an rqlite backend data storage sitting in a bucket as a read-only node, with the editors behind firewalls doing the Raft thing.
From there the question is whether to do static generation or to load SQLite with sql.js and do everything client-side (or a combination of both). Static versus dynamic web pages is well-trod ground at this point, so do what feels best for your application. Personally, I will probably go with static generation, as I prefer to run as little as possible on the client machine to get the best user experience.
The big drawback to using IaaS is (historically) higher TCO and a lack of good abstractions making it easy to self-host under such models. Things like OpenStack make self-hosting possible, but prohibitively expensive; it feels like swatting flies with an elephant gun for most webmasters.
Even containerization and its orchestration tools require a lot of extra thought and code versus the traditional LAMP approach. What's really needed to swim in both pools easily are some abstractions that make it easy for you to:
Develop and host against your object storage (S3 and friends) locally.
Imitate your vendor's stateless compute resources locally.
For S3, s3proxy is exactly what we are looking for. As for an imitation of web workers (or other "stateless" compute resources), that would be far more complex, given there is no standardization across vendors.
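For reference, pointing s3proxy at the local filesystem amounts to a small properties file along these lines (adapted from the project's documented example; double-check the README for current options):

    # s3proxy.conf -- serve /tmp/s3proxy as an S3-compatible endpoint
    s3proxy.endpoint=http://127.0.0.1:8080
    s3proxy.authorization=none
    jclouds.provider=filesystem
    jclouds.filesystem.basedir=/tmp/s3proxy

    # then: java -jar s3proxy --properties s3proxy.conf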
About halfway into my year of doing this solo entrepreneur thing, I realized a lot of my work on tCMS was not done out of a desire to outdo WordPress, Ghost, or any of the other CMSes which are a part of the Cambrian explosion of software most of us have lived through.
Instead, it was actually done for the same reason carpenters build their own house. By god, I'm gonna do it the way I want it for once! What you get is indeed quite satisfying. Though when you zoom out and take the long-term perspective, does it actually mean as much as I feel it does? After all, generations of my ancestors built their own homes and barns. Those now living in them (if they weren't razed) have no idea what went into them or why they were built that way.
It will be the same with software and the brands and businesses built around them. As with houses, the only ones that remain standing will largely be a function of which particular families, towns, firms, and industries managed to stick around. As such, the "0 code" grifters, for all their embarrassing obviousness, are essentially right when they focus almost exclusively on building their customer pipelines.
Now that I'm out in the world of general contracting, I actually see this everywhere. The biggest and most successful businesses tend to run lean and hard on their creaking and ancient facilities, be they real or virtual. Even obsolete software, hardware, and real estate work just fine, and usually with great margins now that they've long outlived their depreciation curves.
What I'm trying to say is that this is yet another reason not to get too wrapped up in your tech. So what if it's a mountain of garbage? Plenty of money to be made mining that heap! Using a dead language? Necromancy tends to have pretty high margins! Lots of people make their livings with run-down trucks and dilapidated real estate.
Which is ultimately why I'm sticking with my little CMS. Sure, it's written in Perl and probably an evolutionary dead end as far as CMSes go. But it's mine, and at the end of the day you have to live like nobody else to live like nobody else. As long as it delivers where I need it to, I'm not gonna sweat the future. Having seen several people build successful businesses with worse tech up close and personal, I'm confident that I can actually build a business atop this little house for my data.
I'm quite blessed to have had good advice, prudent planning, discipline, and a patient business partner, all of which has allowed me to putter around until I figured out how all this works. I'm grateful for the clients I've had up to this point and their ongoing custom. I think this next year I'll finally be able to add a software offering of my own.
I'm also quite happy with how well I've kept up with my open source projects. Now that it looks like I've actually got a credible hourly rate, I'm beginning to wonder whether setting up a charitable OSS foundation (or getting sponsorship from something existing) would make sense, so I can use this pro-bono time as a write-off. I'll have to look into it, and hopefully I can get a good article and video on the subject out in the future.