Selling Internally, Externally and in Interviews

One of the more common interview questions you'll encounter is the time preference question.  I've asked it myself multiple times.  It goes something like this:

  • Tell me about a time you had to make sacrifices in the short term to achieve a long term goal.

Engineering companies very much want to think of themselves as builders of great works made to stand the test of time. They frequently fall short of this because the customer generally wants "Mr. Right Now" instead of "Mr. Right". Wise organizations achieve coherence in their strategic vision by making "fulfill customer desires" itself the long-term goal. I've mentioned before that a vision which does not align with the core business model is doomed to failure, and many companies fall into this trap.

Many view the agglomeration of technical debt associated with an iterative design process as short-term thinking which undermines the long term...but that assumes the goal is to build quality software. In reality, the goal is to build software of acceptable quality that satisfies customer needs; worse is better.  In this framework, much of what goes on at an engineering corporation can be framed as a victory rather than a death march.  The problem to solve then becomes minimizing the duration of each iteration of your OODA loop.

The OODA loop of a software enterprise is basically this:
  1. Observe: Sample reaction to the latest software version
  2. Orient: Refine program and development schedule constraints based on reaction
  3. Decide: Choose optimal algorithms to satisfy new and changed constraints
  4. Act: Test, Anneal and Release

You break out of your loop when you stop getting meaningful observations.  Many organizations have successfully adopted this approach (see OPDCA).  The whole point here is that you accumulate fewer bad designs lurking in your code, as you can refine constraints quickly enough to not over-invest in any particular solution.

Many times this is paired with other questions to find out how much of a self-starter, leader or entrepreneurial aspect you have:

  • How do you drive adoption for your ideas?
  • How do you measure adoption of your ideas?

They always want a concrete example from your past employment, and it needs to be your thing from start to finish. This is usually also a good opportunity to reinforce how well you embrace iterative design principles.  In fact it drives at the real reason they ask the first question.

Knowing how to drive adoption and measure it is key to the observation phase.  If your observations are flawed, they will poison and invalidate the results of every subsequent phase, so you need to get it right.

There are two primary adoption strategies. All marketing is a tree search algorithm of one sort or another thanks to the way influence networks work.

Breadth First vs Depth First Marketing

You can either drive adoption of something within an organization virally (infect the sheep) or evangelically (convert the shepherd).  You can do both, but conditions usually mean you need to lean primarily towards one or the other.
The cost of reaching consumers directly is a great deal higher, and they have a lot less to spend than businesses and bosses with budgets.  That said, the total revenue you can get from targeting retail is vastly larger, and defections from the product are less troublesome.

In general you see a hybrid model nowadays where an open source (or reduced price) component is marketed towards retail, and a paid premium version is marketed towards business.
Infection of the sheep can drive conversion of the shepherd, much the way that conversion of the shepherd can drive the flock.
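
To make the tree-search claim concrete, here is a toy sketch of both traversals over the same influence network; the graph and its node names are invented for illustration:

```perl
#!/usr/bin/env perl
# Toy illustration of the tree-search metaphor; the influence
# graph and its node names are made up for this example.
use strict;
use warnings;

my %influences = (
    shepherd => [qw(lead_a lead_b)],
    lead_a   => [qw(sheep_1 sheep_2)],
    lead_b   => [qw(sheep_3)],
);

# Breadth-first ("infect the sheep"): fan out level by level.
sub bfs {
    my @queue = @_;
    my @reached;
    while ( my $node = shift @queue ) {
        push @reached, $node;
        push @queue, @{ $influences{$node} // [] };
    }
    return @reached;
}

# Depth-first ("convert the shepherd"): ride one chain of influence down.
sub dfs {
    my @stack = @_;
    my @reached;
    while ( my $node = pop @stack ) {
        push @reached, $node;
        push @stack, @{ $influences{$node} // [] };
    }
    return @reached;
}

print 'BFS: ', join( ' -> ', bfs('shepherd') ), "\n";
print 'DFS: ', join( ' -> ', dfs('shepherd') ), "\n";
```

A real influence network has cycles, so an actual traversal would also track visited nodes; the point is simply that the two strategies reach the same people in a very different order and at a very different cost per step.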

When it comes to driving change within organizations, the formula is turned upon its head.  It is actually cheaper to convert fellow drones than the queen, and effect a coup de main. The drones are used to collaborating with each other and value each other's input far more than they do tools provided from above. Similarly, management is incapable of understanding many of the problems which occur in the production process as they happen, supposing they even look for them at all.  Furthermore, getting the kind of feedback needed to iterate and improve is fast and straightforward between drones.

This is why much of the approach around things like Kaizen and Scrum focuses on empowering the drones to streamline production themselves.  The body of know-how this builds is generally referred to as metis, and it is valuable for management to periodically inspect it and experiment with cross-pollinating it across divisions to increase productivity.

War story time

For those of you not familiar with me, I have a decade of experience automating QA processes and testing in general.
This means that the vast majority of my selling has been of two kinds:

  • Selling tactical/strategic/logistic intelligence reports
  • Selling colleagues on tools to improve their productivity

That said, I also wore "all the hats" in my startup days at HailStrike, and once had to talk a customer down from bringing his shotgun to our office.
I handled that one reasonably well, as the week beforehand I'd read Carl Sewell's Customers for Life and Harry Browne's The Secret of Selling Anything.
The problem was that one of our conman CEO's cronies in sales had promised the customer a feature that didn't exist, without giving us a heads-up.
It took me a bit to calm him down and assure him he was talking to a person who could actually help him, but after that I found out what motivated him and devised a much simpler way to get him what he wanted.
A quick code change, a deploy, and a callback later to walk him through wrangling the data in Excel on his end, and we had a happy camper.

He had wanted a way to bulk-import a number of addresses into our systems and get back a list of hailstorms which likely impacted each address, along with a link into our app which would immediately pull up the storm map view (from which they could generate a one-click report for homeowners).

We had a straightforward way of doing this for one address at a time, but I had recently completed optimizations that made it feasible to do many at once, as part of our project to generate reports up to two years back for any address.
Our application was API driven and already had a means to process batched requests, so it was a simple matter of building an Excel macro that talked to our servers, into which he could plug his auth credentials.
I built this that afternoon and sent it his way.  This started a good email chain where we made it an official feature of the application.
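
For flavor, the batch call looked roughly like the sketch below; the endpoint, payload fields and auth header are invented stand-ins rather than the actual HailStrike API:

```perl
#!/usr/bin/env perl
# Hedged sketch of the batched lookup; the endpoint, payload fields
# and auth header are hypothetical, not the real HailStrike API.
use strict;
use warnings;
use HTTP::Tiny;
use JSON::PP qw(encode_json decode_json);

my @addresses = ( '123 Main St, Dallas TX', '456 Oak Ave, Plano TX' );

my $res = HTTP::Tiny->new->post(
    'https://api.example.com/v1/storm-reports/batch',
    {
        headers => {
            'Content-Type' => 'application/json',
            'X-Auth-Token' => $ENV{API_TOKEN},    # customer's own credentials
        },
        content => encode_json( { addresses => \@addresses, years_back => 2 } ),
    }
);
die "Request failed: $res->{status}" unless $res->{success};

# One row per address: likely hail events plus a deep link to the map view.
for my $row ( @{ decode_json( $res->{content} ) } ) {
    printf "%s => %d likely storms (%s)\n",
        @{$row}{qw(address storm_count map_url)};
}
```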

It took a bit longer to build this natively into our application, but before the week was up I'd plumbed the same API calls up to our UI and this feature was widely available to our customers.
I was also able to give a stern talking-to to our sales staff (and gave them copies of C4L and SSS), which kept this from happening going forward, but the company ultimately failed thanks to the aforementioned conman CEO looting the place.

The war within

After that experience I went back to being a salaryman over at cPanel.  There I focused mostly on selling productivity tools internally until I transitioned into a development role.

I'd previously worked on a system we called "QAPortal", which was essentially a testing-focused virtual machine orchestration service based on KVM.  Most of the orchestration services we take for granted today were in their infancy at that time and just not stable or reliable enough to do the job.  Commercial options like CloudFormation or vSphere were also quite young and expensive, so we got things done using Perl, libvirt and a webapp for a reasonable cost.  It also had some rudimentary test management features bolted on.
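
A minimal sketch of that style of orchestration, using Sys::Virt (the Perl libvirt binding); whether QAPortal used these exact calls is my assumption:

```perl
#!/usr/bin/env perl
# Minimal sketch using Sys::Virt, the Perl libvirt binding.  Whether
# QAPortal used these exact calls is an assumption on my part.
use strict;
use warnings;
use Sys::Virt;

my $vmm = Sys::Virt->new( uri => 'qemu:///system' );

# Inventory every defined domain and report its state.
for my $dom ( $vmm->list_all_domains() ) {
    printf "%-30s %s\n", $dom->get_name,
        $dom->is_active ? 'running' : 'shut off';
}
```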

That said, it had serious shortcomings, and the system went essentially unchanged during the two-year hiatus I had over at HailStrike, as all the developers moved on to something else after the sponsoring manager got axed due to his propensity for shouting matches with his peers.
I was quickly tasked with coming up with a replacement.  The department evaluated test management systems and eventually settled on TestRail, for which I promptly wrote the Perl API client and put it on CPAN.
The hardware and virtual machine orchestration was replaced with an OpenStack cluster, for which I wrote an (internal) API library.
I then extended the test runner `prove` to talk to the various machines we needed to orchestrate, multiplexing its argument list over them and reporting results to our test management system (sketched below).
All said, I replaced the old system within about six months.  If it were done today, it would take even less time thanks to the advances in container orchestration which have happened in the intervening years.  The wide embrace of SOAs has made life a lot better.
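
The multiplexing itself is conceptually simple.  Here is a rough sketch of the idea; the hostnames are invented, and the real system pushed results into TestRail rather than just streaming TAP over ssh:

```perl
#!/usr/bin/env perl
# Rough sketch of multiplexing a test run across worker VMs.  The
# hostnames are invented; the real system also reported results to
# the test management system rather than just streaming TAP.
use strict;
use warnings;
use Parallel::ForkManager;

my @hosts = qw(worker-centos7 worker-ubuntu20 worker-cloudlinux8);
my @tests = glob('t/*.t');

my $pfm = Parallel::ForkManager->new( scalar @hosts );
for my $i ( 0 .. $#hosts ) {
    $pfm->start and next;    # parent loops; child does the work
    # Deal each host an interleaved slice of the test list.
    my @slice = @tests[ grep { $_ % @hosts == $i } 0 .. $#tests ];
    system( 'ssh', $hosts[$i], 'prove', '-v', @slice ) if @slice;
    $pfm->finish;
}
$pfm->wait_all_children;
```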

Now the team had the means to execute tests massively in parallel across our needed configurations, but not every team member was technical enough to manage all of this straightforwardly from the command line.  They had become used to the old interface, so over a couple of weekends I built some PHP scripts to wrap our apps as an API service and threw up a jQuery frontend to monitor test execution, manage VMs and handle a few other things the old system had accomplished.
Feedback was a lot easier than with external customers, as my fellow QAs were not shy about logging bugs and feature requests.

I suspect this is a lot of the reason why companies carefully cultivate alpha and beta testers from their early adopter group of rabid fans.  Getting people into "testing mode" is a careful art, which I had to learn administering exploratory test sessions back at TI, and it is not to be discarded carelessly.  That is essentially the core of the issue when it comes to getting valid reports back from customers.  You have to do Carl Sewell's trick of asking "what could have worked better, what was annoying...", as those are the sorts of user feedback you want beyond flat-out bugs.  Anything which breaks the customers' immersion in the product must be stamped out -- you always have to remember you are here to help the user, not irritate them.

Rewarding these users with status, swag and early access was the most reliable way to weed out time-wasters; you only want people willing to emotionally invest, and that means rewards have to encourage deeper integration with the product and the business.  It also doesn't hurt that it's a lot cheaper and easier to justify as expenses than bribes.

Are ya winning, son?

Measuring adoption of software and productivity ideas in general can be tricky unless you have a way to either knock on the door or phone home. Regardless of the approach taken, you also have to track it going forward, but thankfully software makes that part easy nowadays.
Sometimes you use A/B tests and other standard conversion metrics, as I did extensively back at HailStrike.  I may have tested as much copy as I did software!  Truly the job is just writing and selling when you get down to it.
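
For the curious, the arithmetic behind declaring an A/B winner fits in a few lines; the counts below are invented:

```perl
#!/usr/bin/env perl
# Toy two-proportion z-test for an A/B result; the counts are invented.
use strict;
use warnings;

my ( $na, $ca ) = ( 1000, 48 );    # variant A: shown, converted
my ( $nb, $cb ) = ( 1000, 71 );    # variant B: shown, converted

my ( $pa, $pb ) = ( $ca / $na, $cb / $nb );
my $pool = ( $ca + $cb ) / ( $na + $nb );    # pooled conversion rate
my $se   = sqrt( $pool * ( 1 - $pool ) * ( 1 / $na + 1 / $nb ) );
my $z    = ( $pb - $pa ) / $se;

# |z| > 1.96 is significant at the 95% confidence level.
printf "A: %.1f%%  B: %.1f%%  z = %.2f\n", 100 * $pa, 100 * $pb, $z;
```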

In the case of intra-organizational projects, most of the time it's literally knocking on a door and talking to someone.  At some level people are going to "buy" what you are doing, even if all you're selling is advice.  This is nature's way of telling you to "do more of this, and less of the rest".

I can say with confidence that the best tool for the job when it comes to storing this data is a search engine, as you eventually want to look for patterns in "what worked and what didn't".  Search engines and key-value stores give you more flexibility in choosing which IR algorithm best matches the needs of the moment.  I use this trick with test data as well; all test management systems use databases which tend to make building reports cumbersome.
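
A minimal sketch of what that looks like, assuming Elasticsearch via the Search::Elasticsearch CPAN client; the index name and document fields are made up:

```perl
#!/usr/bin/env perl
# Minimal sketch assuming Elasticsearch via the Search::Elasticsearch
# CPAN client; the index name and document fields are made up.
use strict;
use warnings;
use Search::Elasticsearch;

my $es = Search::Elasticsearch->new( nodes => ['localhost:9200'] );

# Record one adoption event; later you can full-text search and facet
# these to find patterns in "what worked and what didn't".
$es->index(
    index => 'adoption-events',
    body  => {
        tool     => 'QAPortal2',
        user     => 'some_colleague',
        action   => 'ran_test_plan',
        outcome  => 'pass',
        happened => time(),
    },
);
```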

Time Preference versus Subjective Value

Rather than flippantly dismiss the original question, I would like to revisit the problem.  While it is obvious that I will probably gain more over the long term by writing this article than by indulging my desire to do something fun instead, one must also take into consideration the law of diminishing marginal utility and the Paradox of Value.  Thinking long term means nothing when one is insolvent or dead without heirs tomorrow.  There will always be an infinite number of possible ends for which to sacrifice my finite means.  As an optimization problem, it is NP-hard.  The best we can do is to use the Kelly Criterion to distribute our time and other assets wisely among the opportunities whose risks we understand best.
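
For the unfamiliar, the Kelly fraction for a simple bet is nearly a one-liner; the win probability and odds below are invented for illustration:

```perl
#!/usr/bin/env perl
# The Kelly fraction for a simple bet: f* = (b*p - q) / b, where p is
# the win probability, q = 1 - p, and b is the net odds received on a
# win.  The inputs below are invented for illustration.
use strict;
use warnings;

sub kelly {
    my ( $p, $b ) = @_;
    return ( $b * $p - ( 1 - $p ) ) / $b;
}

# A 60% shot at even odds says stake 20% of the bankroll.
printf "stake %.1f%% of bankroll\n", 100 * kelly( 0.60, 1.0 );
```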

Building an online reputation is quite expensive and time consuming, but it is beginning to pay off.  It doesn't hurt that I'm pursuing multiple aims simultaneously (building a MicroISV product, chasing contracts) with everything I write these days.  That said, it cannot be denied that hanging out your shingle is tantamount to a financial suicide mission without multiple years of runway.  Had I not spent my entire adult life toiling, living below my means and avoiding debt, none of this would be possible.  In many ways it's a lot like going back to college, but the hard knocks I'm getting these days have taught me a whole lot more than a barrel full of professors ever did.

For those who insist on a technical answer to this question, I would direct you to observe the design of Selenium::Client versus that of Selenium::Remote::Driver.  This is pretty much my signature case for why picking a good design from the beginning and putting in the initial effort to think is worth it.  My go-to approach with most big balls of mud is to stop the bleeding with modular design.  Building standalone plugins that can ship by themselves was a very effective approach at cPanel, and works very well when dealing with "Bad and Right" systems.  What is a lot harder to deal with is "Good and Wrong" systems, usually the result of creationist production.  When dealing with a program that puts users and developers into Procrustes' bed rather than conforming to their needs, you usually have to start over from zero.  Ironically, most such projects are the result of the misguided decision to "rewrite it, but correctly this time".

Given that cPanel at the time was a huge monorepo, sort of personifying "bad design, good execution", many "let's rewrite it, but right this time" projects happened and failed, mostly due to having forgotten the reasons it was written the way it had been in the first place.  New versions of user interfaces failed to delight users, thanks to removing features nobody realized were used extensively, or to making things more difficult for users in the name of "cleaner" and "industry standard" design.  A lot of pain can be brought to a firm when applying development standards begins to override pleasing the customer.  The necessity of doing just that eventually resulted in breaking the monolith to some extent, as building parallel distribution mechanisms was the only means of escaping "standardization" efforts which hindered satisfying customer needs in a timely manner.

This is because attempting to standardize across a monorepo inevitably means you can't find the "always right" one-size-fits-all solution and instead are fitting people into the iron bed.  The solution, of course, is better organizational design rather than program design: namely, to shatter the monolith.  This is also valuable at a certain firm scale (Dunbar's number again), as nobody can fit it all into their head without resorting to public interfaces, SOA and so forth.  Reorientation to this approach is the textbook example of short-term pain that brings long-term benefit, and I've leveraged it multiple times to great effect in my career.
