
Problems with CPAN πŸ”— 1724338205  

🏷️ perl 🏷️ cpan

Those of you who don't lurk the various perl5 groups on social media or P5P may be unaware that there are a number of problems with CPAN, largely with regard to how namespaces are doled out. Essentially, the first distribution to claim a namespace gets to squat there forever, whether you like it or not. Even when the maintainer does not wish for new patches to ever be added, as is the case with DBIx::Class, longstanding custom prohibits anyone else from taking over.

Can the state of affairs be changed? Is this compatible with the various open source licenses and terms of use of PAUSE? The core of it comes down to this passage in the PAUSE rules:

You may only upload files for which you have a right to distribute. This generally means either: (a) You created them, so own the copyright; or (b) Someone else created them, and shared them under a license that gives you the right to distribute them.
Nearly everything on CPAN has a license with which forking is entirely compatible. Similarly, nearly all of them permit patching. As such, a variety of solutions have been proposed.
  • An opt-in 'patched' version of modules available on CPAN, to account for gone/unresponsive maintainers. Implement using cpan distroprefs.
  • Make it clear that ownership of a namespace remains in control of the PAUSE admins, rather than the code authors. This would cut the Gordian knot of things like DBIx::Class.
  • More radical changes, such as "aping our betters" over at NPM and adding a number of their nifty features (security information, private packages, npm fund, etc.)
I personally favor the last option over the long term, while doing what we can now. The perl build toolchain is overdue for a revamp, and abusing things like GitHub releases would massively cut down on the storage requirements for PAUSE. While the GitHub Packages registry would be ideal, it does not support our model, so releases it would have to be.
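
For the record, the distroprefs mechanism mentioned above is driven by YAML files in the cpan client's prefs directory (~/.cpan/prefs by default). A minimal sketch, with a hypothetical author, distribution, and patch path:

```yaml
---
comment: "Hypothetical: auto-apply a community patch to a dist with an absent maintainer"
match:
  distribution: "^SOMEAUTHOR/Some-Dist-1\\.00\\.tar\\.gz$"
patches:
  - "/home/me/patches/some-dist-community-fix.patch"
```

CPAN.pm applies any matching patches before building, which is about as close to an opt-in "patched CPAN" as the current toolchain gets.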

How to get from here to there?

I suppose the idea would be to implement NPM's featureset and call it PNPM (Perl-flavored NPM). You could have it scrape the CPAN, see which modules have primary repos on github, and if they have (non-testing) releases with a higher version number, to prefer that version of the package. That way it would be backwards compatible and give you a path to eventually move entirely off of PAUSE, and into a new model.

That said, it sounds like a lot of work. NPM itself is a business, which is why they have the model of taxing private packages for the benefit of the community at large.

One possible way forward (which would be less work for us) would be to ask if the npm crew wants to expand their business to packaging more than just node code; I imagine most of their infrastructure could be made generic and get us the featureset we want. I'd be shocked if such a thing isn't already on their roadmap, and github's.

They likely wouldn't be on board if they didn't see a viable route to profit from private perl package distribution. Most established perl businesses have long-established distribution channels, and wouldn't see a compelling reason to horizontally dis-integrate this part of their business unless it were significantly cheaper.

Leveraging GitHub would likely be key to that, as they have the needed economy of scale, even beyond things like the S3/R2 people are already using in their distribution channels. NPM likely has enough juice to get new package formats added to GitHub Packages; I suspect we don't.

On the other hand, there might be room in the market to exploit that gap between what GitHub supports as packages and what you can do with good ol' fashioned releases. E.g., make an actually universal package management tool that knows how to talk to each package manager, and can therefore inject a means of distributing (taxed) private packages for mutual benefit, with a % kicked back to the relevant language foundation. Might be worth researching and pitching to VCs.

Back to reality

In the meantime, it's fairly obvious the PAUSE admins could fix the main problems with a little time and will. That's probably the best we'll get.


Perl: Dead and loving it πŸ”— 1724260931  

🏷️ perl

Internet people love to spray their feelings about everything under the sun at every passerby. Perl, being a programming language, is no exception. At the end of the day, all the sound and fury signifies nothing. While I've largely laid out my perspective on this subject here, I suspect it's not quite the engagement bait people crave. Here's some red meat.

The reality is that multi-billion dollar businesses have been built on infinitely worse stacks than what modern perl brings to the table. What's your excuse, loser? Quit whining and build.

Sturgeon's Law applies to everything, programs, languages and their authors included. 90% of the time you will be driving your stack like you stole it until the wheels fall off, swatting flies with elephant guns, yak shaving and putting vault doors on crack houses. What matters is that you focus the 10% of time that you are "on" where it counts for your business.

You only have a limited amount of time on this earth, and much less where you are in the zone. It will almost never be a good use of that time to learn the umpteenth new programming language beyond the bare minimum needed to get what you want done. So don't do it if you can avoid it.

There are many other areas in life where we engage in rational ignorance; your trade will be no exception. Learning things before you use them is a waste of time, because you will forget most of it. I've forgotten more math than most people ever learn.

Having written in more than 20 programming languages now, the feeling I have about all of them is the same.

  • They all have footguns, and they're usually the useful part of the language.
  • Some aspect of the build toolchain is so bad you nearly have an aneurysm.
  • I retreat into writing SQL to escape the pain as much as possible. It's the actually superior/universal programming language, sorry to burst your bubble.
  • FFI and tools for building microservices are good enough you can use basically any other library in any other language if you need to.
  • HTML/JS are the only user interface you should ever use aside from a TTY, everything else is worse, and you'll need it eventually anyways.
I reject the entire premise that lack of interest in a programming language ought to matter to anyone who isn't fishing for engagement on social media. Most people have no interest whatsoever in the so-last-millennium Newton-Raphson method, and yet it's a core part of this LLM craze. Useful technology has a way of not dying. What is useful about perl will survive, and so will your programming career if you stick around for the ride.

Remember the craftsman's motto: Maintain > Repair > Replace. Your time would be better spent not whining on forums, and instead writing more documentation and unit tests. If whining on forums is how you spend your free time, I would advise you to do what the kids say, and "Touch Grass". Otherwise how are you gonna tell the kids to get off your damned lawn?

Why all companies eventually decide to switch to the new hotness

You can show management the repeated case studies that:

  • switching programming languages is always such a time-consuming disaster that firms lose significant market share (forgotten features, falling behind competitors)
  • differences in project failure rates across firms show no statistically significant dependence on the programming languages used
it bounces off them for two reasons:
  1. Corn-Pone opinions. Naturally managers are predisposed to bigger projects & responsibilities, as that's the path onward and upwards.
  2. "Senior" developers who know little about the existing tech stack, and rationally prefer to work on what they are familiar with.
The discerning among you are probably thinking "Aha! What if you have enough TOP MEN that know what's up?" This is in fact the core of the actual problem at the firm: turnover. They wouldn't even consider switching if those folks were still there.

They could vertically integrate a pipeline to train new employees to extend their lease on life, but that's quite unfashionable these days. In general that consists of:

  • University Partnerships
  • Robust paid modding scenes
  • The Tech -> QA -> Dev pipeline
The first is a pure cost center, the second also births competitors, and the third relies on hiring overqualified people as techs/QAs. Not exactly a fun sell to shareholders who want profit now, a C-level that has levered the firm to the point there is no wiggle room, and managers who would rather "milk the plant" and get a promotion than be a hero and get run over.

This should come as no shock. The immediate costs are why most firms eschew vertical integration. However, an ounce of prevention is worth a pound of cure. Some things are too important to leave to chance, and unfortunately this is one of them.

Ultimately, the organization, like all others before it, at some point succumbs to either age or the usual corporate pathologies which result in bouts of extreme turnover. This is the curse of all mature programming languages and organizations. Man and his works are mortal; we all pay the wages of our sins.

Conclusion

This "Ain't your grandpappy's perl", and it can't be. It's only as good as we who use perl are. Strap in, you are playing Calvinball. Regardless of which language you choose, whether you like it or not, you are stuck in this game. It's entirely your choice whether it is fun and productive, or a grave.


Net::OpenSSH::More πŸ”— 1723163198  

🏷️ perl 🏷️ ssh 🏷️ cpan-module

We have released to the CPAN a package that implements some of the parts of Net::OpenSSH that were left as "an exercise for the reader." This is based on Andy's and my experiences in cPanel's QA department, among other things. It differs in important ways from what was used in the QA department there (they have also since moved on to a less bespoke testing framework):

  • It is a "true" subclass, and methods specific to a particular platform, such as Linux, can likewise be implemented as subclasses of Net::OpenSSH::More.
  • It *automatically* reconnects upon dropped connections, etc. and in general is just a lot nicer to use if you wanted an "easy mode" SSH accessor lib for executing commands.
  • Execution can be made *significantly* faster via a stable implementation of using expect to manage a persistent SSH connection.
  • ...and more! See the POD for full details.
Many thanks of course go out to current and former colleagues (and cPanel/WebPros specifically). Despite their perl QA SSH libraries never being part of a product cPanel shipped publicly, methods like these made remotely manipulating hosts for black-box testing so much easier that even authors with minimal training in perl could successfully write tests that executed against a remote host. This is of value for organizations that need to run tests in an isolated environment, or for anyone wishing to build their own CI suite or SSH-based orchestrators in perl without having to rediscover the hard "lessons learned" that almost anyone who has used Net::OpenSSH extensively eventually does.
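
For flavor, here's a usage sketch of the sort of "easy mode" access described above. Treat it as hypothetical: the constructor options and method names are assumed from Net::OpenSSH conventions rather than copied from the POD, which remains the authority:

```perl
use strict;
use warnings;
use Net::OpenSSH::More::Linux;

# HYPOTHETICAL sketch -- option and method names assumed, consult the POD.
my $ssh = Net::OpenSSH::More::Linux->new(
    host => 'test-host.example.com',   # hypothetical test host
    user => 'root',
);

# Run a command on the remote host; the module reconnects
# automatically if the connection has dropped in the meantime.
my $output = $ssh->cmd('uname -r');
```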

Of course, the most profuse of thanks go out to Salvador, who provided the excellent module this extends.

Eventually we plan to extend this package to do even more (hehe), but for now figured this was good enough to release, as it has what's probably the most useful bits already.


Using libvirt with terraform on ubuntu πŸ”— 1723038220  

🏷️ terraform 🏷️ ubuntu 🏷️ libvirt

In short, do what is suggested here.

For the long version, this is a problem because terraform absolutely insists on total hamfisted control of its resources, including libvirt pools. This means that it must create a new pool, which is necessarily outside the realm of its AppArmor rules. As such, you have to turn that stuff off in the libvirt config file.
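
That fix boils down to disabling the security driver for qemu. The exact steps below are my reconstruction of the commonly documented workaround, not a quote from the linked page, and note that it trades away confinement of your VMs:

```shell
# Disable AppArmor confinement for qemu-driven domains so the pools and
# volumes terraform creates outside libvirt's expected paths still work.
# WARNING: this weakens isolation between guests and host.
sudo sed -i 's/^#\?security_driver\s*=.*/security_driver = "none"/' /etc/libvirt/qemu.conf
sudo systemctl restart libvirtd
```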

Some important notes now that I'm using it to deploy resources.

Other useful things to remember

  • hold escape after reboot to get a boot menu to go single-user when using virtual console via virt-manager, etc.
  • virsh net-dhcp-leases default - get the 'local' IPs of the VMs so spawned
  • Cloud-init logs live in /var/log/cloud-init*.log
Longer term I should build a configuration script for the HV to properly set up the AppArmor rules, but hey.

How I learned to love postfix for in perl πŸ”— 1722982036  

🏷️ perl

Suppose you do a common thing like a mapping into a hash but decide to filter the input first:
shit.pl
my %hash = map {
    "here" => $_
} grep {
    -d $_
} qw{a b c d .};
This claims there is a syntax error on line 6, where the grep starts, because perl guesses that the { opening the map body is an anonymous hash constructor rather than a block. This is a clear violation of the principle of least astonishment, as both the map and grep work by themselves when not chained. We can fix this by assigning $_ like so:
fixed.pl
my %hash = map {
    my $subj = $_;
    "here" => $subj
} grep {
    my $subj = $_;
    -d $subj
} qw{a b c d .};

Now we get what we expect, which is no syntax error. This offends the inveterate golfer in me, but the explicit lexical is enshrined in many perlcritic policies for a reason, in particular when nesting lexical scopes inside the map/grep body would otherwise be a problem, which is not the case here.
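
If the extra lexicals offend, perl's map documentation also sanctions a one-character disambiguation: a leading ; inside the braces commits the parser to a BLOCK instead of an anonymous hash, so the chained grep parses cleanly. A minimal sketch:

```perl
use strict;
use warnings;

# The leading ';' forces perl to treat '{' as the start of a block,
# not an anonymous hash constructor, so chaining grep works again.
my %hash = map {; "here" => $_ } grep { -d $_ } qw{a b c d .};
```

The same trick works in reverse: prefix with +{ when you really do want map to emit hashrefs.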

But never fear, there is a superior construct to map in all cases...postfix for!

oxyclean.pl
my %hash = "here" => $_ for grep { -d $_ } qw{a b c .};
No syntax errors, and it's a one-liner. It's also faster, due to not assigning lexicals.

On General Aviation πŸ”— 1722897713  

🏷️ aviation

I wanted to be a pilot as a young man. While I did learn to fly, I ended up a mathematician, programmer and tester. Even then I am incredibly frustrated by corporate pathologies which prevent progress and meaningful improvement at the firms and industries I interact with. But it's nothing compared to the dead hand which smothers "General Aviation", which is how you learn to fly if you aren't one of the pampered princes of the USAF. This is not to say the USAF isn't dysfunctional (it is), but that GA is how I learned to fly, and frankly how most people would in a properly functioning situation.

I usually don't talk about it because it rarely comes up. Look up in the sky and 99 times out of 100 you'll see nothing unless you live next to an international airport. Sometimes people complain about "crowded" airspace and I want some of what they're smoking. You could easily fit 1000x more active aircraft in the sky safely.

Imagine my surprise when I see Y Combinator is taking their turn in the barrel. I wonder what has them so hopeful? If I were to hazard a guess, it comes from the qualification at the end of their post where they mention "a plethora of other problems that make flying cumbersome". Here are my thoughts on the ones they mentioned.

  • weight and balance worksheets - A sensor package which could detect excessive loading or too aft a CG is possible.
  • complicated route planning - When I last flew 20 years ago, filing a flight plan was a phone call and you needed to understand the jargon to make it happen. I suspect it's no different today. A computerized improvement is entirely possible.
  • talking to ATC - Comms tech in aviation remains "get them on the radio" even in cases where route adjustments could be pushed as data via a sideband. Ideally getting people on the main freq is the exception rather than the rule.
  • lengthy preflight checks - Could largely be automated by redundant sensors. Ultimately like cars it could be an "idiot light", which I'm sure instantaneously raises blood pressure in most pilots.
  • a fractured system of FBOs - Who do I call to file my flight plan? It's (not) surprising this hasn't yet been solved; there's a cottage industry of middlemen here. My kingdom for a TXT record.
  • difficult access to instruction - In town, good luck learning to fly. Half hour of procedural compliance for a 10 minute flight, all on the clock. Not to mention dealing with gridlock traffic there and back. There are nowhere near the number of airports needed for flight to be remotely accessible to the common man.

They go on to state "the list goes on. We are working on all of these too". Good luck, they'll need it. The FAA is legendarily hidebound and triply so when it comes to GA. Everyone before them who tried was gleefully beheaded by the federal crab bucket.

All this stuff is pretty obvious to anyone who flies and understands tech, but this regulatory environment ruthlessly selects against people who understand tech. Why would you want to waste your life working on airframes and powerplants with no meaningful updates in more than a half-century? Or beat your head against the brick wall of the approvals process to introduce engine tech that was old hat in cars 50 years ago?

It's not shocking the FAA and GA in general is this way. Anyone who can improve the situation quickly figures out this is a club they ain't in, and never will be. Everyone dumb/stubborn enough to remain simply confirms the biases the regulators have about folks in "indian country". Once a brain drain starts, it takes concerted effort to stop. The feds do not care at all about that problem and likely never will.

This is for two reasons. First, the CAB (predecessor of FAA) strangled the aviation industry on purpose in service of TWA in particular, and that legacy continues to poison the well. The aviation industry has exactly the kind of "revolving door" criticized in many other regulatory and federal contractor situations. This is why they don't devote a single thought to things like "which FBO should I call". Like with any other regulator the only answer to all questions is "read their minds" (have one of 'em on the payroll).

Second, there is no "General Aviation" lobby thanks to this century-long suppression, so nobody in politics cares about fixing this. Like with the sorry state of rocketry pre-SpaceX, this will require a truly extreme amount of work, no small amount of luck, and downright chicanery to cut the Gordian knot. I love that the founders of this firm are SpaceX alums; perhaps they have what it takes.


How to fix Wedged BIND due to master-master replication πŸ”— 1722479687  

🏷️ dns 🏷️ bind

rm /var/named/_default.nzd
In short, you have to nuke the zone database containing the remote zone that says "HEY IM DA MASTA" when you have the local zone going "NUH UH, ME MASTER". This means you'll have to manually rebuild all the remote zones; tough shit. There is no other solution, as there's no safe way to actually putz with _default.nzd, the binary successor to the old _default.nzf. Seasoned BIND hands would tell you "well, don't do that in the first place". I would say, yes, I agree. Don't use BIND in the first place.
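
Concretely, the recovery amounts to something like the following (zone name and master IP are hypothetical; paths assume a stock /var/named layout):

```shell
# Stop named, remove the wedged binary new-zone database, start fresh.
systemctl stop named
rm /var/named/_default.nzd
systemctl start named

# Then re-add every remote zone by hand, e.g.:
rndc addzone example.com '{ type slave; masters { 192.0.2.1; }; file "example.com.db"; };'
```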

POTZREBIE
© 2020-2023 Troglodyne LLC