The core reason SELinux and other mandatory access control schemes have failed is that they do not integrate well into developer workflows. As such, the only parties which implement them are large organizations with effectively infinite resources to hurl at the problem. Everyone else turns them off because it's far, far too much work, even for distro package maintainers.
seccomp-BPF changed all of this. Once you could filter syscalls in the kernel, sandboxing by non-root processes was straightforwardly possible. Individual application authors and package maintainers can ship their own rules without stepping on anyone else's toes, and easily rule out interference from other programs' rules. Its release resulted in a number of similar solutions such as firejail, bubblewrap and others.
It seems there's a new effort in this sphere, called Landlock. The core questions are: how is this any better, and why should I use it? From a capabilities point of view, it won't be more capable than the seccomp-BPF based solutions. What differentiates it, as far as I can tell, is:
It remains a systematic frustration with security projects that articles written about them by their own authors bury the lede or attempt to baffle with BS. That is unavoidable, unfortunately, when the subject is a corporate (in this case Microsoft) project.
Unfortunately, a number of syscalls that don't touch kernel objects are still plenty scary, so seccomp-BPF can't be abandoned in favor of this. Increased complexity for marginal gains in all but the most demanding environments is going to be a hard sell for most developers.
The core remaining hurdle to actually using sandboxing on any system is dynamically linked dependencies. A properly sandboxed and chrooted environment which has access to all such deps is usually so open as to be little better than doing nothing. The only way to cut that Gordian knot is to ape Apple and mount / (or wherever your libdirs are) read-only. Distros like SuSE's MicroOS have embraced this with enthusiasm, so I suspect sandboxing may finally become ubiquitous. Whether distros go beyond seccomp-BPF and embrace Landlock remains to be seen. seccomp'd distro-packaged apps remain rare outside of flatpak/snap, which are themselves about as beloved as skin diseases with end users, and tremendously wasteful due to being cross-distro.
Many also rightly feel trepidation that read-only system partitions are a foot in the door for "secure boot" (read: you no longer control your hardware). SystemD recently implemented support for just that, and the increasing number of ARM based servers means Linux could become cellphones faster than we might think. For good, and ill.
Those of you who don't lurk the various perl5 groups on social media or P5P may be unaware that there are a number of problems with CPAN, largely with regard to how namespaces are doled out. Essentially, the first distribution to claim a namespace gets to squat there forever, whether you like it or not. And if the maintainer does not wish for new patches to ever be added, as is the case with DBIx::Class, longstanding custom prohibits anyone else from taking over the namespace.
Can the state of affairs be changed? Is this compatible with the various open source licenses and terms of use of PAUSE? The core of it comes down to this passage in the PAUSE rules:
You may only upload files for which you have a right to distribute. This generally means either: (a) You created them, so own the copyright; or (b) Someone else created them, and shared them under a license that gives you the right to distribute them.

Nearly everything on CPAN has a license with which forking is entirely compatible. Similarly, nearly all of them permit patching. As such, a variety of solutions have been proposed.
I suppose the idea would be to implement NPM's featureset and call it PNPM (Perl-flavored NPM). You could have it scrape the CPAN, see which modules have primary repos on github, and, if those have (non-testing) releases with a higher version number, prefer that version of the package. That way it would be backwards compatible and give you a path to eventually move entirely off of PAUSE and into a new model. A rough sketch of that preference check follows.
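Here is roughly what that check could look like. The MetaCPAN endpoint path, the JSON fields it returns, and the GitHub releases endpoint are assumptions from memory rather than gospel, so treat this as a sketch of the approach, not a finished tool:

use strict;
use warnings;
use HTTP::Tiny;
use JSON::PP qw(decode_json);
use version;

# Hypothetical 'pnpm' resolver: prefer a GitHub release over the CPAN index
# for a module when the repo has a newer (non-prerelease) version tagged.
my $module = shift // 'Some::Module';
my $http   = HTTP::Tiny->new( agent => 'pnpm-sketch/0.01' );

# 1) Ask MetaCPAN for the indexed version and the primary repo URL.
my $res = $http->get("https://fastapi.metacpan.org/v1/module/$module");
die "MetaCPAN lookup failed\n" unless $res->{success};
my $meta         = decode_json( $res->{content} );
my $cpan_version = version->parse( $meta->{version} );
my $repo         = $meta->{metadata}{resources}{repository}{url} // '';

# 2) If the repo is on GitHub, compare against its latest release tag.
if ( my ( $owner, $name ) = $repo =~ m{github\.com[:/]+([^/]+)/([^/.]+)} ) {
    my $rel = $http->get("https://api.github.com/repos/$owner/$name/releases/latest");
    if ( $rel->{success} ) {
        ( my $tag = decode_json( $rel->{content} )->{tag_name} // '' ) =~ s/^v//;
        my $gh_version = eval { version->parse($tag) };
        if ( $gh_version && $gh_version > $cpan_version ) {
            print "Prefer GitHub release v$tag over CPAN $cpan_version for $module\n";
            exit 0;
        }
    }
}
print "Stick with CPAN $cpan_version for $module\n";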
That said, it sounds like a lot of work. NPM itself is a business, which is why they have the model of taxing private packages for the benefit of the community at large.
One possible way forward (which would be less work for us) would be to ask if the npm crew wants to expand their business to packaging more than just node code; I imagine most of their infrastructure could be made generic and get us the featureset we want. I'd be shocked if such a thing isn't already on their roadmap, and github's.
They likely wouldn't be onboard if they didn't see a viable route to profit from private perl package distribution. Most established perl businesses already have long-established distribution channels, and wouldn't see a compelling reason to horizontally dis-integrate this part of their business unless it were significantly cheaper.
Leveraging github would likely be key to that, as they have the needed economy of scale, even beyond things like S3/R2 people are already using in their distribution channels. NPM likely has enough juice to get new package formats added to github packages, I suspect we don't.
On the other hand, there might be room in the market to exploit the gap between what github supports as packages and what you can do with good ol' fashioned releases, e.g. an actually universal package management tool that knows how to talk to each package manager and can therefore inject a means of distributing (taxed) private packages for mutual benefit, with a percentage kicked back to the relevant language foundation. Might be worth researching and pitching to VCs.
In the meantime, it's fairly obvious the PAUSE admins could fix the main problems with a little time and will. That's probably the best we'll get.
Internet people love to spray their feelings about everything under the sun at every passerby. Perl, being a programming language, is no exception. At the end of the day, all the sound and fury signifies nothing. While I've largely laid out my perspective on this subject here, I suspect it's not quite the engagement bait people crave. Here's some red meat.
The reality is that multi-billion dollar businesses have been built on infinitely worse stacks than what modern perl brings to the table. What's your excuse, loser? Quit whining and build.
Sturgeon's Law applies to everything, programs, languages and their authors included. 90% of the time you will be driving your stack like you stole it until the wheels fall off, swatting flies with elephant guns, yak shaving and putting vault doors on crack houses. What matters is that you focus the 10% of time that you are "on" where it counts for your business.
You only have a limited amount of time on this earth, and much less of it where you are in the zone. It will almost never be a good use of that time to learn the umpteenth new programming language beyond the bare minimum needed to get what you want done. So don't do it if you can avoid it.
There are many other areas in life where we engage in rational ignorance; your trade will be no exception. Learning things before you use them is a waste of time, because you will forget most of it. I've forgotten more math than most people ever learn.
Having written in more than 20 programming languages now, the feeling I have about all of them is the same.
Remember the craftsman's motto: Maintain > Repair > Replace. Your time would be better spent not whining on forums, and instead writing more documentation and unit tests. If you spend your free time on that stuff, I would advise you to do what the kids say, and "Touch Grass". Otherwise how are you gonna tell the kids to get off your damned lawn?
You can show management the repeated case studies that:
They could vertically integrate a pipeline to train new employees to extend their lease on life, but that's quite unfashionable these days. In general that consists of:
This should come as no shock. The immediate costs are why most firms eschew vertical integration. However, an ounce of prevention is worth a pound of cure. Some things are too important to leave to chance, and unfortunately this is one of them.
Ultimately, the organization, like all others before it, at some point succumbs to either age or the usual corporate pathologies which result in bouts of extreme turnover. This is the curse of all mature programming languages and organizations. Man and his works are mortal; we all pay the wages of our sins.
This "Ain't your grandpappy's perl", and it can't be. It's only as good as we who use perl are. Strap in, you are playing calvinball. Regardless of which language you choose, whether you like it or not, you are stuck in this game. It's entirely your choice whether it is fun and productive, or it is a grave.
We have released to the CPAN a package that implements some of the parts of Net::OpenSSH that were left as "an exercise to the reader." This is based on Andy's and my experiences over at cPanel's QA department, among other things. It differs in important ways from what was used in the QA department there (they have also since moved on to a less bespoke testing framework):
Eventually we plan to extend this package to do even more (hehe), but for now figured this was good enough to release, as it already has what are probably the most useful bits.
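For context, this is the flavor of boilerplate Net::OpenSSH leaves to the reader and which a wrapper like this exists to absorb. The retry helper below is purely illustrative (it is not Net::OpenSSH::More's actual interface); only the Net::OpenSSH calls themselves (new, capture, error) are the real, documented API:

use strict;
use warnings;
use Net::OpenSSH;

# Hand-rolled equivalent of "connect with retries, then run and check a command".
sub connect_with_retries {
    my ( $host, %opts ) = @_;
    my $tries = delete $opts{retries} // 3;
    for my $attempt ( 1 .. $tries ) {
        my $ssh = Net::OpenSSH->new( $host, %opts );
        return $ssh unless $ssh->error;
        warn "Attempt $attempt to reach $host failed: " . $ssh->error . "\n";
        sleep 2 * $attempt;
    }
    die "Could not connect to $host after $tries attempts\n";
}

my $ssh    = connect_with_retries( 'host.test', user => 'root', retries => 3 );
my $uptime = $ssh->capture('uptime');
die "remote command failed: " . $ssh->error if $ssh->error;
print $uptime;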
In short, do what is suggested here.
For the long version: this is a problem because terraform absolutely insists on total, ham-fisted control of its resources, including libvirt pools. That means it must create a new pool, which is necessarily outside the realm of libvirt's existing AppArmor rules. As such you have to turn that stuff off in the libvirt config file.
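If memory serves, the relevant knob is the security_driver setting in the QEMU driver config (typically /etc/libvirt/qemu.conf; your distro's path may differ), followed by a restart of libvirtd:

# /etc/libvirt/qemu.conf
# Disable AppArmor/SELinux confinement of guests so terraform-created pools work
security_driver = "none"

Then: systemctl restart libvirtd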
Important stuff now that I'm using it to deploy resources.
shit.pl
my %hash = map {
    "here" => $_
} grep {
    -d $_
} qw{a b c d .};
This claims there is a syntax error on line 6, where the grep starts.
This is a clear violation of the principle of least astonishment, as both the map and the grep work by themselves when not chained. (The underlying cause is that the parser guesses the opening brace after map starts an anonymous hash rather than a block, since the first thing inside it is a quoted string.)
We can fix this by assigning $_ to a named lexical, like so:
fixed.pl
my %hash = map {
    my $subj = $_;
    "here" => $subj
} grep {
    my $subj = $_;
    -d $subj
} qw{a b c d .};
Now we get what we expect, which is no syntax error. This offends the inveterate golfer in me, but assigning $_ to a lexical is in many perlcritic rulesets for a reason; in particular it matters when nesting lexical scopes inside the map/grep body would otherwise be a problem, which is not the case here.
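For completeness, the standard disambiguation (not used above) is to lead the block with a bare semicolon (or a +), which forces the parser to treat the braces as a block rather than an anonymous hash:

my %hash = map {; "here" => $_ } grep { -d $_ } qw{a b c d .};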
But never fear, there is a superior construct to map in all cases...postfix for!
oxyclean.pl
my %hash = "here" => $_ for grep { -d $_ } qw{a b c .};
No syntax errors, and it's a one-liner.
It's also faster, due to not setting up a lexical scope on each iteration.
I usually don't talk about it because it rarely comes up. Look up in the sky and 99 times out of 100 you'll see nothing unless you live next to an international airport. Sometimes people complain about "crowded" airspace and I want some of what they're smoking. You could easily fit 1000x more active aircraft in the sky safely.
Imagine my surprise when I see Y Combinator is taking their turn in the barrel. I wonder what has them so hopeful? If I were to hazard a guess, it comes from the qualification at the end of their post where they mention "a plethora of other problems that make flying cumbersome". Here are my thoughts on the ones they mentioned.
They go on to state "the list goes on. We are working on all of these too". Good luck, they'll need it. The FAA is legendarily hidebound and triply so when it comes to GA. Everyone before them who tried was gleefully beheaded by the federal crab bucket.
All this stuff is pretty obvious to anyone who flies and understands tech, but this regulatory environment ruthlessly selects against people who understand tech. Why would you want to waste your life working on airframes and powerplants with no meaningful updates in more than a half-century? Or beat your head against the brick wall of the approvals process to introduce engine tech that was old hat in cars 50 years ago?
It's not shocking that the FAA and GA in general are this way. Anyone who can improve the situation quickly figures out this is a club they ain't in, and never will be. Everyone dumb/stubborn enough to remain simply confirms the biases the regulators have about folks in "indian country". Once a brain drain starts, it takes concerted effort to stop. The feds do not care at all about that problem and likely never will.
This is for two reasons. First, the CAB (a predecessor of the FAA) strangled the aviation industry on purpose, in service of TWA in particular, and that legacy continues to poison the well. The aviation industry has exactly the kind of "revolving door" criticized in many other regulatory and federal contractor situations. This is why they don't devote a single thought to things like "which FBO should I call". As with any other regulator, the only answer to all questions is "read their minds" (i.e. have one of 'em on the payroll).
Second, there is no "general aviation" lobby thanks to this century-long suppression, so nobody in politics cares about fixing this. As with the sorry state of rocketry pre-SpaceX, this will require a truly extreme amount of work, no small amount of luck, and downright chicanery to cut the Gordian knot. I love that the founders of this firm are SpaceX alums; perhaps they have what it takes.
rm /var/named/_default.nzd
In short, you have to nuke the zone database with the remote zone that says "HEY IM DA MASTA" when you have the local zone going "NUH UH, ME MASTER".
This means you'll have to manually rebuild all the remote zones, tough shit.
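Rebuilding means re-adding each secondary zone by hand. If memory serves, rndc addzone is the tool for that (the zone name, master IP, and zone file below are placeholders):

rndc addzone example.com '{ type slave; masters { 192.0.2.1; }; file "example.com.db"; };'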
There is no other solution, as there's no safe way to actually putz with _default.nzd, the binary version of _default.nzf.
Seasoned BIND hands would tell you "well, don't do that in the first place". I would say, yes I agree. Don't use BIND in the first place.
Despite being the worst attended YAPC in recent memory, 2024's show in Vegas had some of the best talks in a long while. In no particular order, the ones I remember after a week are:
This year we had another Science track in addition to the perl and raku tracks; that's the one I submitted my testing talk to. In no particular order, the ones I enjoyed were:
That being said, the next conference is very much in doubt. Due mostly to corporate sponsorship of employee attendance drying up, the foundation took a bath on this one. I'm sure the waves of mutual excommunication and factionalism in the perl "community" at large haven't helped, but most of those who put on such airs wouldn't have deigned to attend in the first place. My only productive thought would be to see what the Japanese perl conference is doing, and ape our betters. Lots of attendance, and they're even doing a second one this year. They must be doing something right.
I got positive feedback on both of my talks. I suspect the one with the most impact will be the playwright one, as it has immediate practical application for most in attendance. That said, I had the most productive discussions coming out of the testing talk. In particular, the bit at the start where I went over the case for testing in general exposed a lot of new concepts to people. One of the retirees in the audience, who raised the point that the future was "Dilbert instead of Deming", was right on the money. Most managers have never even heard of Deming or Juran, much less implemented their ideas.
Nevertheless, I suspect calling out fraud where I see it was too "political" for some. I would point out that the particular example I used (Boeing) is being prosecuted for fraud as of this writing, though everyone expects they'll get a slap on the wrist. While "the ideal amount of fraud in a system is nonzero", as patio11 puts it, the systematic distribution of it and the near complete lack of punishment is (as mentioned in the talk) quite corrosive to public order. It has similar effects in the firm.
My lack of tolerance for short-sighted defrauding of customers and shareholders has gotten me fired on three occasions in my life, and I've fired clients over it. I no longer fear any retaliation for this, and as such was able to go into depth on why to choose quality instead. Besides, a reputation for uncompromising honesty has its own benefits. Sometimes people want to be seen as cleaning up their act, after all.
I enjoyed very much working with LaTeX again to write the paper. I think I'll end up writing a book on testing at some point.
I should be able to get a couple of good talks ready for next year, supposing it happens. I might make it to the LPW, and definitely plan on attending the Japanese conference next year.
Back when I worked at cPanel, I implemented a feature to have customer virtualhosts automatically redirect to SSL if they had a valid cert and were configured to re-up it via LetsEncrypt (or other providers). However, this came with a significant caveat: it could not work on servers where the operator overrode our default vhost template. There is no sane way to inject rules into an environment where you don't even know if the template is valid. At least not in the amount of time we had to implement the project.
Why did we have this system of "templates" which were then rendered and injected into Apache's configuration file? Because Apache's configuration model is ass-backwards and has no mechanism for overriding configs for specific vhosts. Its fundamental primitive is a "location" or "directory", whose value is either a filesystem path or a URI path component.
Ideally this would instead be a particular vhost name, such as "", "127.0.0.1", "foobar.test", or even several of them. But because it isn't, we saw no benefit to using the common means of separating configs for separate things (like vhosts), the conf.d directory. Instead we parsed and regenerated the main config file any time a relevant change happened. In short, we had to build a configuration manager, which means that manual edits to fix anything will always get stomped. The only way around that is to have user-editable templates consumed by the manager (which we implemented via a $template_file.local override).
Nginx recognized this, and its server primitive is organized around vhosts. However, they did not go all the way and allow multiple server blocks referring to the same vhost, with the last one encountered (say, in the conf.d/ directory) taking precedence. It is not spelled out in the documentation, but later blocks referring to the same host behave the same way they do in apache. As such, configuration managers are still needed when dealing with nginx in a shared hosting context.
This is most unfortunate as it does not allow the classic solution to many such problems in programming to be utilized: progressive rendering pipelines. Ideally you would have a configuration model like so:
vhost * {
    # Global config goes here
    ...
}
include "/etc/httpd/conf.d/*.conf"

# Therein we have two files. First, "00-clients-common.conf":
vhost "foobar.test" "baz.test" {
    # Configuration common to various domains goes here, overriding previously seen keys for the vhost(s)
    ...
}

# And also "foobar.test.conf":
vhost "foobar.test" {
    # Configuration specific to this vhost goes here, overriding previously seen keys for the vhost
    ...
}
The failure of the web server authors to adopt such a configuration model is what makes configuration managers necessary. Had they adopted it, they would not be needed, and cPanel's "redirect this vhost to SSL" checkbox would work even with client overrides. This is yet another reason much of the web has relegated the web server to the role of "shut up and be a reverse proxy for my app".
At one point another developer at cPanel decided he hated that we "could not have nice things" in this regard and figured out a way we could have our cake and eat it too via mod_macro. However, it was never prioritized and died on the vine. Anyone who works in corporate long enough has a thousand stories like this. Like tears in rain.
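To give the flavor of what mod_macro buys you (an illustrative sketch, not the actual proposal from back then): the shipped macro can carry the SSL-redirect logic, and even a heavily customized vhost template only has to emit a single Use line per vhost.

# Shipped by the vendor, outside the user-editable template
<Macro SSLRedirect $domain>
    <VirtualHost *:80>
        ServerName $domain
        Redirect permanent / https://$domain/
    </VirtualHost>
</Macro>

# Emitted by the (possibly customized) vhost template: one line per vhost
Use SSLRedirect foobar.test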
nginx also doesn't have an equivalent to mod_macro. One of the few places apache is in fact better. But not good enough to justify switching from "shut up and reverse proxy".
Today I submitted a minor patch for File::Slurper::Temp. Unfortunately the POD there doesn't tell you why you would want to use this module. Here's why.
It implements the 'rename-in-place' pattern for editing files. This is useful when you have multiple processes reading from a file which may be written to at any time. That roughly aligns with "any non-trivial perl application". I'm sure this module is not the only one on CPAN that implements this, but it does work out of the box with File::Slurper, which is my current favorite file reader/writer.
If you do not lock a file under these conditions, eventually a reader will consume a partially written file. For serialized data, this is the same as corruption.
Using traditional POSIX file locking with fcntl() using an RW lock comes with a number of drawbacks:
The rename() approach avoids this because rename() just swaps which inode the path points to. Existing readers continue happily reading the stale old inode, never encountering corrupt data. This of course means there is a window of time where stale data is used (e.g. the implicit TOCTOU in any action dependent on fread()). Update your cache invalidation logic accordingly, or be OK with "eventual consistency".
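The pattern itself is small enough to hand-roll. Here is a minimal sketch using core modules (File::Temp, File::Basename), which is roughly what File::Slurper::Temp does for you behind the File::Slurper interface (its actual knobs and defaults may differ):

use strict;
use warnings;
use File::Temp ();
use File::Basename qw(dirname);

# Write $content to $target atomically: write a temp file in the same
# directory, then rename() it over the target in one step.
sub write_atomically {
    my ( $target, $content ) = @_;
    my $tmp = File::Temp->new( DIR => dirname($target), UNLINK => 0 );
    print {$tmp} $content or die "write: $!";
    close $tmp or die "close: $!";
    rename( "$tmp", $target ) or die "rename: $!";
}

write_atomically( '/tmp/demo.json', qq({"ok":true}\n) );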
Be aware of one drawback here: the temporary file is (by default) created in the same directory as the target, as a means of avoiding EXDEV. That's the error you get from attempting to rename() across devices, where a copy (fcopy()) is more appropriate. If you are, say, globbing across that directory with no filter, hilarity may ensue. You should point the temp files at some other periodically-cleaned directory on the same disk; otherwise, given enough time and killed scripts, it will fill with orphans.