You break out of your loop when you stop getting meaningful
observations. Many organizations have successfully adopted this
approach (see OPDCA). The whole point here is that you accumulate
fewer bad designs lurking in your code, as you can refine
constraints quickly enough to avoid over-investing in any
particular solution.
For those of you not familiar with me, I have a decade of
experience automating QA processes and testing in general.
This means that the vast majority of my selling has been of two
kinds:
That said, I also wore "all the hats" in my startup days at
HailStrike, and once had to talk a customer down from bringing his
shotgun to our office.
I handled that one reasonably well, as the week beforehand I'd
read Carl Sewell's Customers for Life and Harry Browne's The
Secret of Selling Anything.
The problem was that one of our conman CEO's cronies, a sales
cretin, had promised the customer a feature that didn't exist and
hadn't given us a heads-up.
It took me a bit to calm him down and assure him he was talking to
a person who could actually help him, but after that I found out
what motivated him and devised a much simpler way to get him what
he wanted.
A quick code change, a deploy, and a callback later to walk him
through a few things to do on his end to wrangle the data in
Excel, and we had a happy camper.
He had wanted a way to bulk-import a number of addresses into our
systems, get back a list of hailstorms which had likely impacted
each address, and get a link into our app which would pull up the
storm map view immediately (from which he could then generate a
one-click report for homeowners).
We had a straightforward way of doing this for one address at a
time, but I had recently completed optimizations that made it
feasible to do many at once, as part of our project to generate
reports up to two years back for any address.
Our application was API-driven and already had a means to process
batched requests, so it was a simple matter of building an Excel
macro that talked to our servers and into which he could plug his
auth credentials.
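The macro itself was VBA glue, and that API is long gone, but the
underlying idea is simple enough to sketch in Perl. Everything
here (the endpoint, field names, and auth scheme) is a hypothetical
stand-in, not the real HailStrike interface:

```perl
#!/usr/bin/env perl
# Illustrative only: the endpoint, field names, and auth scheme are
# hypothetical stand-ins for the original batch API.
use strict;
use warnings;
use HTTP::Tiny;
use MIME::Base64 qw(encode_base64);
use JSON::PP qw(encode_json decode_json);

my ( $user, $token ) = @ENV{qw(HS_USER HS_TOKEN)};
my @addresses = map { chomp; $_ } <STDIN>;    # one address per line

# Batch the addresses into a single request rather than one call apiece.
my $response = HTTP::Tiny->new->post(
    'https://api.example.com/v1/storms/batch',
    {
        headers => {
            'Content-Type'  => 'application/json',
            'Authorization' => 'Basic ' . encode_base64( "$user:$token", '' ),
        },
        content => encode_json( { addresses => \@addresses } ),
    },
);
die "Request failed: $response->{status} $response->{reason}\n"
  unless $response->{success};

# Emit a CSV the customer can pull straight into Excel:
# address, storm date, and a deep link into the storm map view.
my $results = decode_json( $response->{content} );
for my $hit ( @{ $results->{storms} } ) {
    print join( ',', @{$hit}{qw(address storm_date map_url)} ), "\n";
}
```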
I built this that afternoon and sent it his way. This
started a good email chain where we made it an official feature of
the application.
It took a bit longer to build this natively into our application,
but before the week was up I'd plumbed the same API calls up to
our UI and this feature was widely available to our customers.
I was also able to give our sales staff a stern talking-to (and
gave them copies of C4L and SSS), which kept this from happening
going forward, but the company ultimately failed thanks to the
aforementioned conman CEO looting the place.
After that experience I went back to being a salaryman over at
cPanel. There I focused mostly on selling productivity tools
internally until I transitioned into a development role.
I'd previously worked on a system we called "QAPortal", which was
essentially a testing-focused virtual machine orchestration
service based on KVM. Most of the orchestration services we take
for granted today were in their infancy at that time and just not
stable or reliable enough to do the job. Commercial options like
CloudFormation or vSphere were also quite young and expensive, so
we got things done using Perl, libvirt and a webapp for a
reasonable cost. It also had some rudimentary test management
features bolted on.
That said, it had serious shortcomings, and the system went
essentially unchanged during the two-year hiatus I spent over at
HailStrike, as all the developers had moved on to something else
after the sponsoring manager got axed due to his propensity for
shouting matches with his peers.
I was quickly tasked with coming up with a replacement. The
department evaluated test management systems and eventually
settled on TestRail, for which I promptly wrote the Perl API
client and put it on CPAN.
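That client is essentially a wrapper over TestRail's documented
REST API. A minimal sketch of the kind of calls involved, assuming
a recent TestRail instance (the instance URL, credentials, and IDs
are placeholders):

```perl
#!/usr/bin/env perl
# Rough sketch of the calls a TestRail API client has to wrap.
# Endpoints are from TestRail's public REST API (v2).
use strict;
use warnings;
use HTTP::Tiny;
use JSON::PP qw(encode_json decode_json);
use MIME::Base64 qw(encode_base64);

my $base = 'https://example.testrail.io/index.php?/api/v2';
my $auth = 'Basic ' . encode_base64( 'user@example.com:API_KEY', '' );
my $http = HTTP::Tiny->new(
    default_headers => {
        'Authorization' => $auth,
        'Content-Type'  => 'application/json',
    },
);

# List projects so we can find the one our runs live under.
my $projects = decode_json( $http->get("$base/get_projects")->{content} );

# Record a passing result for a case in an existing run.
my ( $run_id, $case_id ) = ( 42, 1337 );    # placeholders
$http->post(
    "$base/add_result_for_case/$run_id/$case_id",
    { content => encode_json( { status_id => 1, comment => 'ok via prove' } ) },
);
```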
The hardware and virtual machine orchestration was replaced with
an OpenStack cluster, for which I wrote an (internal) API library.
I then extended the test runner `prove` to talk to that cluster
and multiplex its argument list over the various machines we
needed to orchestrate, reporting results to our test management
system.
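The extension itself was internal, but the core multiplexing idea
is simple enough to sketch: deal `prove`'s list of test files out
to worker hosts and run each slice remotely, failing the run if
any slice fails. Host names are placeholders and result reporting
is omitted here:

```perl
#!/usr/bin/env perl
# Minimal sketch of multiplexing a prove-style argument list across
# worker hosts over ssh. Host names are placeholders.
use strict;
use warnings;

my @workers = qw(qa-worker1 qa-worker2 qa-worker3);
my @tests   = @ARGV or die "usage: $0 t/*.t\n";

# Deal the tests out round-robin so each worker gets a roughly equal slice.
my %slice;
push @{ $slice{ $workers[ $_ % @workers ] } }, $tests[$_] for 0 .. $#tests;

my @pids;
for my $host ( keys %slice ) {
    my $pid = fork() // die "fork failed: $!";
    if ( !$pid ) {
        # Child: run this worker's slice remotely and stream TAP back.
        exec 'ssh', $host, 'prove', '-v', @{ $slice{$host} }
          or die "exec failed: $!";
    }
    push @pids, $pid;
}

# Reap the workers; a non-zero exit from any slice fails the whole run.
my $failed = 0;
for my $pid (@pids) {
    waitpid $pid, 0;
    $failed ||= $? >> 8;
}
exit $failed;
```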
All told, I replaced the old system within about six months.
Were it done today, it would take even less time thanks to the
advances in container orchestration made in the intervening years.
The wide embrace of SOAs has made life a lot better.
Now the team had the means to execute tests massively in parallel
across our needed configurations, but not every team member was
technical enough to manage this all straightforwardly from the
command line. They had become used to the old interface, so
in a couple of weekends I built some PHP scripts to wrap our apps
as an API service and threw up a jQuery frontend to monitor test
execution, manage VMs and handle a few other things the old system
also accomplished.
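Those scripts are long gone, and they were PHP rather than Perl,
but the wrapping trick itself is worth showing. Here is a hedged
sketch of the same idea in Perl using Mojolicious::Lite, with
hypothetical routes and a hypothetical `vm-status` helper standing
in for the real tooling:

```perl
#!/usr/bin/env perl
# Sketch: expose existing command-line tooling as a small JSON API
# that an AJAX frontend can poll. Routes and helper commands are
# hypothetical stand-ins for the original PHP scripts.
use Mojolicious::Lite;
use JSON::PP qw(decode_json);

# GET /vms -> current VM inventory, as reported by the existing CLI tool.
get '/vms' => sub {
    my $c   = shift;
    my $out = `vm-status --json`;    # hypothetical wrapper script
    $c->render( json => decode_json($out) );
};

# POST /runs -> kick off a test run in the background, return its id.
post '/runs' => sub {
    my $c     = shift;
    my $suite = $c->param('suite') // 'smoke';
    my $pid   = open( my $fh, '-|', 'run-tests', $suite )
      or die "spawn failed: $!";
    $c->render( json => { run_id => $pid, suite => $suite } );
};

app->start;    # run under morbo for development, hypnotoad for production
```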
Feedback was a lot easier than with external customers, as my
fellow QAs were not shy about logging bugs and feature requests.
I suspect this is a lot of the reason why companies carefully
cultivate alpha and beta testers from their early-adopter group of
rabid fans. Getting people into "testing mode" is a careful art,
one I had to learn administering exploratory test sessions back at
TI, and not one to be discarded carelessly. That is essentially
the core of the issue when it comes to getting valid reports back
from customers. You have to use Carl Sewell's trick of asking
"what could have worked better, what was annoying...", as that is
the sort of user feedback you want beyond flat-out bug reports.
Anything which breaks the customers' immersion in the product must
be stamped out -- you always have to remember you are here to help
the user, not irritate them.
Rewarding these users with status, swag and early access was the
most reliable way to weed out time-wasters; you only want people
willing to emotionally invest, and that means rewards have to
encourage deeper integration with the product and the
business. It also doesn't hurt that it's a lot cheaper and
easier to justify as expenses than bribes.
Measuring adoption of software and productivity ideas in general
can be tricky unless you have a way to either knock on the door or
phone home. Regardless of the approach taken, you also have to
track it going forwards, but thankfully software makes that part
easy nowadays.
Sometimes you use A/B tests and other standard conversion metrics,
as I did extensively back at HailStrike. I may have tested as
much copy as I did software! Truly, the job is just writing and
selling when you get down to it.
In the case of intra-organizational projects, most of the time
it's literally knocking on the door and talking to someone. At
some level people are going to "buy" what you are doing, even if
it's just giving advice. This is nature's way of telling you to
"do more of this, and less of the rest".
I can say with confidence that the best tool for the job when it
comes to storing this data is a search engine, as you eventually
want to look for patterns in "what worked and didn't".
Search engines and key-value stores give you more flexibility in
choosing which IR (information retrieval) algorithm best matches
the needs of the moment. I use this trick with test data as
well; all test management systems use relational databases, which
tend to make building reports cumbersome.
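As a concrete (and entirely illustrative) example of the trick,
here is roughly what dumping test results into a recent
Elasticsearch looks like with the Search::Elasticsearch client;
the index name and document shape are made up:

```perl
#!/usr/bin/env perl
# Sketch: store raw test results in a search engine instead of a
# relational schema. Index name and document shape are invented.
use strict;
use warnings;
use Search::Elasticsearch;
use Data::Dumper;

my $es = Search::Elasticsearch->new( nodes => ['localhost:9200'] );

# One flat document per test result; let the engine worry about the schema.
$es->index(
    index => 'test_results',
    body  => {
        test     => 't/billing/renewal.t',
        status   => 'FAILED',
        platform => 'centos7-mysql8',
        run_at   => '2014-03-01T12:34:56Z',
        output   => 'expected 200, got 500 from /renew',
    },
);

# Later: free-text queries over "what worked and didn't".
my $hits = $es->search(
    index => 'test_results',
    body  => { query => { match => { output => 'renew 500' } } },
);
print Dumper( $hits->{hits}{hits} );
```

Free-text search over failure output is exactly the sort of report
a rigid relational schema makes painful.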
Rather than flippantly dismiss the original question, I would
like to revisit the problem. While it is obvious that I will
probably gain more over the long term by sacrificing my desire to
do something fun instead of writing this article, one must also
take into consideration the law of diminishing
marginal utility and the Paradox of Value. Thinking
long term means nothing when one is insolvent or dead without
heirs tomorrow. There will always be an infinite number of
possible ends for which I sacrifice my finite means. As an
optimization problem, it is NP-hard. The best we can do is to
use the Kelly Criterion to distribute our time and other assets
wisely among the opportunities whose risks we best understand.
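For the unfamiliar, the Kelly Criterion in its simplest binary-bet
form says to commit the fraction f* = (bp - q)/b of your bankroll,
where b is the net odds, p your probability of winning, and
q = 1 - p. A trivial sketch, purely to make the reference
concrete rather than to describe how I actually budget my time:

```perl
# Standard Kelly fraction for a simple binary bet. Illustrative only.
sub kelly_fraction {
    my ( $p, $b ) = @_;    # $p = win probability, $b = net odds per unit staked
    my $q = 1 - $p;
    my $f = ( $b * $p - $q ) / $b;
    return $f > 0 ? $f : 0;    # a negative edge means don't bet at all
}

# e.g. a 60% shot that pays even money: commit 20% of available resources.
printf "%.0f%%\n", 100 * kelly_fraction( 0.60, 1 );
```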
Building an online reputation is quite expensive and
time-consuming, but it is beginning to pay off. It doesn't hurt
that I'm pursuing multiple aims simultaneously (building a
MicroISV product, chasing contracts) with everything I write these
days. That said, it cannot be denied that hanging out your
shingle is tantamount to a financial suicide mission without
multiple years of runway. Had I not spent my entire adult life
toiling, living below my means and avoiding debt, none of this
would be possible. In many ways it's a lot like going back to
college, but the hard knocks I'm getting these days have taught me
a whole lot more than a barrel full of professors ever did.
For those who insist on the technical answer to this question, I
would direct you to observe the design of Selenium::Client
versus that of Selenium::Remote::Driver.
This is pretty much my signature case
for why picking a good design from the beginning and putting in
the initial effort to think is worth it. My go-to approach
with most big balls
of mud is to stop the bleeding with modular design.
Building standalone plugins that can ship by themselves was a very
effective approach at cPanel, and it works very well when dealing
with "Bad and Right" systems. What is a lot harder to deal with
is "Good and Wrong" systems, usually the result of creationist
production. When dealing with a program that puts users and
developers into Procrustes' bed rather than conforming to their
needs, you usually have to start back from zero. Ironically, most
such projects are the result of the misguided decision to "rewrite
it, but correctly this time".
Given that cPanel at the time was a huge monorepo personifying
"bad design, good execution", many "let's rewrite it, but right
this time" projects happened and failed, mostly due to having
forgotten the reasons the code was written the way it had been in
the first place. New versions of user interfaces failed to
delight users, thanks to removing features nobody realized were
used extensively, or making things more difficult for users in the
name of "cleaner" and "industry standard" design. A lot of pain
can be brought to a firm when applying development standards
begins to override pleasing the customer. The necessity of
pleasing customers eventually resulted in breaking the monolith to
some extent, as building parallel distribution mechanisms was the
only means of escaping "standardization" efforts which hindered
satisfying customer needs in a timely manner.
This is because attempting to standardize across a monorepo
inevitably means you can't find the "always right"
one-size-fits-all solution, and instead are fitting people into
the iron bed. The solution, of course, is better organizational
design rather than program design; namely, to shatter the
monolith. This is also valuable at a certain firm scale
(Dunbar's number again), as nobody can fit it all into their head
without resorting to public interfaces, SOA and so forth.
Reorientation to this approach is the textbook example of
short-term pain that brings long-term benefit, and I've leveraged
it multiple times to great effect in my career.