Choosing Between Schedule, Budget, Scope, and Quality

Yaacov Apelbaum-Choosing Between Options

The popular wisdom (argumentum ad populum) is that the best way to beat your competitor is to one-up him. If he has 2 features, you need 3 (4 would be even better). If he is budgeting 1 x $ on his solution, you need to spend 2 x $.

This arms-race style of one-upmanship is a dead end. It’s a prohibitively expensive, reactionary, and schizophrenically paranoid way of developing software. Organizations following this philosophy can’t plan for the future; they can only react to the past. They don’t lead by innovation, they follow by imitation. If you want to rapidly build a leading solution and not merely mimic existing products, you need to focus on innovation and quality.

The traditional Project Management iron triangle model proposes that every project is controlled by these four elements:

  1. Schedule
  2. Budget
  3. Scope (features and functionality)
  4. Quality

Yaacov Apelbaum - Iron Triangle of Project Management

Illustration 1. The PM Iron Triangle

Some project managers believe that you can only pick three out of the four variables. I don’t agree, because quality should never be compromised. Would you buy a new car knowing that it has a defective engine or transmission? Most likely not, regardless of how many other features it has or whether it shipped to the dealer on time. Quality is not a variable; it is a given, and as such it should always be baked into the product.

I’ve seen numerous projects that compromised on quality by reducing unit test coverage and then engaging in creative bug severity reclassification, just to catch up with a slipping shipping date. In the end, no amount of Bondo and marketing spin can cover up the quality holes in your delivery. Still not convinced? Check out the annals of software development screw-ups; they are full of product humdingers that shipped on time, on budget, and on scope, but failed miserably because of substandard quality.

So what do you do when the going gets tough? How do you make that difficult decision and choose the right sacrifice for the greater good? And while we’re at it, what is the greater good anyway?

In this posting, I’ll share with you a few insights into selecting the lesser of two evils, and some specific project hostage negotiation techniques that may help you get out of some tight delivery spots.

The Project Web
On the surface, each of the four options from the list above seems to be equally weighted, but in reality it’s not that simple. In a dynamic, living and breathing software project, the schedule, budget, and scope actually carry different degrees of importance and impact each other in subtle ways.

Yaacov Apelbaum-Choosing Between Options Pyramid

Illustration 2. The Project pyramid

It turns out that rather than an interwoven triangular knot made up of equal sides that can be resized and traded equally, the project elements actually resemble an intricate spider web (Illustration 3). This structure in turn is controlled by many variable and dynamic factors. As you apply force (i.e. pull or push) to any strand in this web, the whole web flexes and deforms unevenly along multiple axes. Each change you apply to the project (adding scope, increasing the budget, etc.) will ripple through the entire project structure like a pebble dropped in a pond.

Yaacov Apelbaum-Project Option Grid

Illustration 3. The Project Element Web

As you can see in Illustration 4, increasing the budget and decreasing the schedule will adversely impact the schedule, scope, and quality. This action will also disproportionately deform all project axes and introduce a negative self-reinforcing feedback loop. The team will now need to quickly add more development resources to complete the additional work, which will result in less time spent on actual development and more time on knowledge transfer, which will introduce more bugs, which will ultimately reduce the overall quality of the software.

Yaacov Apelbaum-Project Options Budget Increase Schedule Decrease

Illustration 4. The Project Element Web – Impact of Budget Increase and Schedule Decrease

This runs counter to the popular wisdom that increasing a project’s budget allows you to complete the work faster and with more functionality.

The Project Annihilator Shooter
The software product development lifecycle is in many ways similar to a shooter video game. You play each scene collecting oxygen, ammunition, and kill points. If you are successful and survive enough missions without losing your vitals (i.e. reputation, stamina, leadership, etc.), you get to move on to the next level and play again. If you run out of ammunition or oxygen, or your life gauge falls below some critical level, you’re out of the game.

The Project Annihilator Dashboard

Following this analogy, your ability to continue to play the software development equivalent of Metro: Last Light depends on your ability to juggle all the elements that go into your software project.

The following are your variables:

Ammunition/Oxygen = Funding/Budget
Scene Length = Schedule and drop timeline
Number of Targets per Scene = Number of features in your project

Here’s an easy way to launch on time and on budget: keep features minimal and fixed. Never throw more time or money at a problem; just scale back the scope. There’s a myth that says you can launch on time, on budget, and on scope while keeping quality in check. It’s nonsense, it never happens, and when you try, quality always takes a hit.

If you can’t fit everything within the allotted time and budget, don’t expand the time and budget. Instead, scale back the scope. You will always have time to add more content later; later is eternal, now is short-lived. Launching a smaller product early is better than launching a solution that is full of bugs just to hit some magical date. Leave the magic to Siegfried, Roy, and their tigers. Your customer will judge you on how well your features work, not on how many of them you’ve delivered.

Still not convinced? Here are some more benefits of fixing time and budget and keeping scope flexible:

Keep Scope Real and Relevant
Less is more. Less code, less documentation, less regression testing. Remove all but the essentials. Trust me, most of what you think is essential is actually not. Keep your solution small and agile.

Use rapid iterations to lower the cost of change. And believe me, if you plan to deliver a functional product and are soliciting your users for input, there is going to be a lot of change. Launch frequently, tweak, and launch again; constantly improving your solution will help you deliver just what customers need and eliminate anything they don’t.

Break Down Delivery into Phases and Prioritize Content
This is an important exercise to go through regardless of project constraints. You have to figure out what’s important for your customer and how you can deliver the most critical functionality in multiple phases. Not being tied to a single monolithic delivery will force you to make some tough decisions, but it will also eliminate the need to prostrate yourself in front of the war console.

It may sound corny, but having a portion of a working product on the customer’s machine is far better than having a full but poorly functioning solution on your build server.

Be Flexible
The ability to quickly change is paramount for your success. Having the scope, budget, and schedule fixed makes it tough to change. Injecting scope flexibility will allow you to be opportunistic in resolving technology and operational unknowns.

Remember, flexibility is your friend, so de-scope, reduce features, and break your monolithic delivery plan into smaller drops. In the end, there is no escaping the proverbial SDLC grim reaper and the fact that you will have to fold. The sooner you show your cards, the better off you’ll be in the long run.

Dump the Ballast
Delivering less content will allow you to cut all the fluff that comes with managing upwards (i.e. lengthy PowerPoint presentations, fictional development progress reports, etc.) and force you to focus on building the real McCoy.

In larger organizations, weeks can be spent on planning roadmaps and arguing about the fine details of each feature with the goal of everyone reaching consensus on what would be the ‘best’  solution.

That may be the right approach for a $$$ mainframe product, but if you are developing in internet time, shipping fast with quality is the way to go. You can always trust the user to tell you if your scope is sufficient. If it’s not, you can fix it in the next phase and ship again.

When it comes to scope, there is no better judgment than the customer’s. Resist the urge to engage in long-winded scheduling and planning, and especially stay away from:

  • Senseless hundreds of pages of slideware roadmaps
  • Schedules that depend on multi-year project plans
  • Pristine roadmaps that predict the future (in terms of market and customer needs)
  • Pie-in-the-sky solutions (technology that is immature or non-existent)
  • Reliance on never-ending status/staff meetings and project reviews
  • Dependency on hiring dozens of new resources in order to meet the current increase in scope
  • Endless version, branching, and integration debates

To succeed in building competitive software, you don’t need an excessive budget, a large team, or a long development cycle. You just need to simplify your solution and deliver it to the customer as fast as possible with the highest quality.

Or as Steve Jobs put it: “That’s been one of my mantras – focus and simplicity. Simple can be harder than complex: You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains.”

© Copyright 2013 Yaacov Apelbaum. All Rights Reserved.

The API Train Wreck Part-2

ApelbaumYaacov API Train Wrack 2

In The API Train Wreck Part-1, I discussed API design factors such as KPI, performance measurements, monitoring, runtime stats, and usage counters.  In this posting, I’ll discuss the design factors that will influence your API’s load and traffic management architecture.

Load and Traffic Management
One of the key issues that you are going to face when opening your API to the world is the adverse effect on your own front-end and back-end systems. If you are using a mixed model, where the same API is used for both internal line-of-business applications and external applications, uncontrolled bursts of traffic and spikes in data requests can cause your service to run red hot.

In a mixed model, as the number of external API users increases, it’s only a question of time before you start experiencing gremlin effects such as brownouts, timeouts, and periodic back-end shutdowns. Due to the stateless nature of SOA and the multiple points of possible failure, troubleshooting the root cause of these problems can be a nightmare.

This is why traffic management is one of the first and most important capabilities you should build into your API. A good example of this type of implementation can be found in the Twitter REST API rate limiting. Twitter provides a public API feed of their stream for secondary processing. In fact, there is an entire industry out there that consumes, mines, enriches, and resells Twitter data, but this feed is separate from their internal API and it contains numerous traffic management features such as caching, prioritizing active users, adapting search results, and blacklisting to ensure that their private feed will not be adversely impacted by the public API.
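Well-behaved consumers are the other half of this picture. As a minimal sketch (in Python, using only the standard library; the endpoint URL is hypothetical), here is a client that honors an HTTP 429 rate-limit response and its optional Retry-After header instead of hammering the feed:

```python
import time
import urllib.error
import urllib.request

def fetch_with_backoff(url, max_attempts=5):
    """Fetch a rate-limited resource, honoring HTTP 429 / Retry-After."""
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise  # not a rate-limit rejection, surface it
            # Wait as long as the server asks, or back off exponentially.
            retry_after = err.headers.get("Retry-After")
            delay = int(retry_after) if retry_after else 2 ** attempt
            time.sleep(delay)
    raise RuntimeError("still rate limited after %d attempts" % max_attempts)

# Hypothetical usage:
# data = fetch_with_backoff("https://api.example.com/v1/stream")
```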

Bandwidth Management
Two of the most common ways to achieve efficient bandwidth management and regulate the flow are via (1) throttling and (2) rate-limiting.  When you are considering either option, make the following assumptions:

  • Your API users will write inefficient code and only superficially read your documentation. You need to handle the complexity and efficiency issues internally as part of your design.
  • Users will submit complex queries with maximum query ranges. You can’t tell users not to be greedy; rather, you need to handle these eventualities in your design.
  • Users will often try to download all the data in your system multiple times a day via a scheduler. They may have legitimate reasons for doing so, so you can’t just deny their requests because it’s inconvenient for you. You need to find creative ways to deal with this through solutions like off-line batch processing or subscriptions to updates.
  • Users will stress test your system in order to verify its performance and stability. You can’t just treat them as malicious attackers and simply cut off their access. Take this type of access into account and accommodate it through facilities like separate test and certification regions.

In order to be able to handle these challenges, you need to build a safety fuse into your circuitry.  Think of Transaction Throttling and Rate Limiting as a breaker switch that will protect your API service and backend.
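To make the breaker-switch idea concrete, here is a minimal token-bucket throttle sketch in Python (the class name, capacity, and refill rate are illustrative assumptions, not taken from any particular product). Each API key gets its own bucket, and once the bucket is empty the request is rejected instead of being passed to the back-end:

```python
import time
from collections import defaultdict

class TokenBucket:
    """A simple token bucket: holds up to `capacity` tokens, refilled at `rate_per_sec`."""
    def __init__(self, capacity=10, rate_per_sec=1.0):
        self.capacity = capacity
        self.rate = rate_per_sec
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, but never above capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # breaker trips: the caller should get an HTTP 429

# One bucket per API key, so a single greedy client can't starve everyone else.
buckets = defaultdict(TokenBucket)

def is_request_allowed(api_key):
    return buckets[api_key].allow()
```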

Another reason to implement transaction throttling is that you may need to measure data consumption against a time-based quota. Once such a mechanism is in place, you can relatively easily segment your customers by various patterns of consumption (i.e. date and time, volume, frequency, etc.). This is even more relevant if you are building a tier-based service where you charge premium prices per volume, query, and speed of response.

Rationing, Rate Limitations, and Quotas
OK, so now we are ready to implement some quota mechanisms that will provide rate limits. But just measuring data consumption against daily quotas can still leave you vulnerable to short bursts of high-volume traffic, and if it is done via ‘per second’ rate limits, it can be perceived as business-unfriendly.

If your customers are running a high volume SaaS solution and are consuming your API as part of their data supply chain, they will probably find it objectionable when they discover that you are effectively introducing pre-defined bottlenecks into their processing requests.

So even if you consider all data requests equal, you may find that some requests are more equal than others (due to the importance of the calling client) or that they contain high-rate transactions or large messages, so just limiting ‘X requests per second’ might not be sufficient.
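One way to account for this, sketched below under assumed (made-up) tier allowances, is to charge each request a cost derived from its record count, message size, and caller priority, and deduct that cost from the caller’s allowance instead of counting every call as one request:

```python
# Assumed per-tier allowances in cost units per window; the figures are illustrative.
TIER_ALLOWANCE = {"free": 100, "standard": 1_000, "premium": 10_000}

def request_cost(record_count, message_bytes, priority=1.0):
    """Weight a request by its size and priority rather than counting it as '1 request'."""
    return (record_count + message_bytes / 1024) / priority

class Allowance:
    """Tracks one caller's remaining allowance for the current window (window reset omitted)."""
    def __init__(self, tier):
        self.remaining = TIER_ALLOWANCE[tier]

    def charge(self, record_count, message_bytes, priority=1.0):
        cost = request_cost(record_count, message_bytes, priority)
        if cost > self.remaining:
            return False  # over quota for this window: throttle, queue, or reject
        self.remaining -= cost
        return True

# A 5,000-record bulk pull "costs" far more than a single-record lookup,
# even though both arrive as one HTTP request each.
acct = Allowance("standard")
print(acct.charge(record_count=5000, message_bytes=2_000_000))   # False
print(acct.charge(record_count=1, message_bytes=512))            # True
```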

Here are my core 13 considerations for designing the throttling architecture.

  1. Will you constrain the rate of a particular client’s access by API key or IP address?
  2. Will you limit your data flow rate by user, application, or customer?
  3. Will you control the size of messages or the number of records per message?
  4. Will you throttle your own internal apps differently from public API traffic?
  5. Will you support buffer or queue based requests?
  6. Will you offer business KPIs on API usage (for billing, SLA, etc.)?
  7. Will you keep track of daily or monthly usage data to measure performance?
  8. Will you define different consumption levels for different service tiers?
  9. How will you handle over-quota conditions (i.e. deny request, double bill, etc.)?
  10. Will you measure data flow for billing and pricing?
  11. Will you monitor and respond to traffic issues (security, abuse, etc.)?
  12. Will you support dynamic actions (i.e. drop or ignore request, send an alert, etc.)?
  13. Will you provide users with usage metadata so they can help by metering themselves (see the sketch after this list)?

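Point 13 is worth singling out because it is cheap to deliver: if every response (or at least every rejection) tells the caller where it stands, well-behaved clients can meter themselves before they ever hit the wall. Here is a rough sketch of what an over-quota response might carry; the X-RateLimit-* header names follow a widespread convention rather than a formal standard, and the limits are illustrative:

```python
import json

RATE_LIMIT = 100        # requests per window; illustrative figure
WINDOW_SECONDS = 900    # 15-minute window; also illustrative

def over_quota_response(remaining, reset_epoch):
    """Build an HTTP 429 response that tells the caller how to meter itself."""
    headers = {
        "X-RateLimit-Limit": str(RATE_LIMIT),
        "X-RateLimit-Remaining": str(max(remaining, 0)),
        "X-RateLimit-Reset": str(reset_epoch),   # when the current window rolls over
        "Retry-After": str(WINDOW_SECONDS),
    }
    body = json.dumps({"error": "quota exceeded", "retry_after": WINDOW_SECONDS})
    return 429, headers, body
```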
Obviously, no single consideration in the list above will singlehandedly control your design. During your deliberation process, dedicate some time to go over each of the 13 points. List the pros and cons for each and try to estimate the overall impact on the development scope and overall project timeline.

For example, evaluating consideration 1, “Constrain rate of a particular client access by API key”, might yield the following decision table:

  • Pro: Conducive to per-seat and volume licensing models and can support both enterprise and individual developers. Con: Need to invest in the setup and development of a key lifecycle management system. Impact: Effort is estimated at x man-months and will push the project delivery date by y months.

  • Pro: Tighter control over user access and activity. Con: Need to provide customer support to handle key lifecycle management. Impact: Will require hiring/allocating x dedicated operational resources to support provisioning, audit, and reporting.

  • Pro: Conducive to a tiered licensing and billing model, including public, government, and evaluate-before-you-buy promotions. Con: Managing international users will require specialty provisioning and multi-lingual capabilities. Impact: Will complement and support the company’s market and revenue strategy.

  • Pro: Will scale well to millions of users. Con: Initial estimate of the number of customers is relatively small.

Resist the temptation to go at it ‘quick and dirty’. Collect as much input as possible from your front-end, middle-tier, back-end, operations, and business stakeholders. Developing robust and effective throttling takes careful planning, and if not done correctly it can become the Achilles’ heel of your entire service.

The above pointers should cover the most critical aspects of planning for robust API traffic management.

© Copyright 2013 Yaacov Apelbaum. All Rights Reserved.

The API Train Wreck Part-1

ApelbaumYaacov API Train Wrack

Developing a commercial-grade API wrapper around your application is not a trivial undertaking. In fact, in terms of effort, the development of an outbound-facing API can exceed all other application development tasks combined by an order of magnitude.

I’ve seen some large development organizations attempt to open their internal platform to the world just to watch in horror as their core systems came to a screeching halt before ever reaching production.

In this and a follow-up posting, I’ll try to provide you with some hands-on tips about how you can improve the quality of your API and more efficiently ‘productize’, ‘commercialize’, and ‘operationalize’ your data service.

It’s 9 PM, do you know who’s using your API?
Do you have accurate and up-to-date KPIs for your API traffic and usage? Are these counters easily accessible, or do you need to trawl through numerous web server and application logs to obtain this data? Believe it or not, most APIs start as orphaned PoCs that were developed with the approach of ‘do first, ask permission later’. Things like usage metrics are often put on the back burner until project funding and sponsorship roll in.

Paradoxically, due to their integrative effect, application APIs can provide the biggest boost to traffic and revenue, but at the same time they can become your biggest bottleneck.

So my first tip is to build usage information into your core functions. Before you embark on an API writing adventure, your product management and development teams should have clear answers to these questions:

How many clients, applications, and individual developers will use your API?

  • Will you offer public and private versions of it?
  • What will the distribution of your clients, apps, and developers by type and location be?
  • Who will the top-tier customers be? Developers? Partners?
  • What parts of the API will be used most heavily?
  • What will the traffic breakdown between your own services and 3rd party services be?
  • What will the aggregate per application and per customer transaction volumes be?
  • How fast and reliable will your service be (i.e. response, latency, downtime, etc.)?
  • What are the architectural and technical limiting factors of your design (i.e. volume)?
  • How will the API be monitored and SLA’d?
  • What type of real-time performance dashboards will you have?
  • How will public-facing API usage affect the rest of your platform (i.e. throttling)?
  • Will you handle error routines generically or identify and debug specific messages?
  • Do you need to support auditing capabilities for compliance (i.e. SOX, OFEC, etc.)?
  • What will the volume of the processed/delivered data be (i.e. from third parties)?

Asking these questions—and getting good answers to them—is critical if you plan to develop a brand new SaaS platform to expose your internal platforms to the world.

This information is also critical to your ability to establish contractual relationships with your internal users and external customers. Depending on the type of information you serve, it may also be mandatory for compliance with state and federal regulations.

Collecting the correct performance metrics and mapping them to your users and business functions will help you stay one step ahead of your users’ growing demands.
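As a minimal illustration of baking usage information into your core functions, the Python sketch below (the handler and logger names are hypothetical) wraps each API handler in a decorator that records the caller, endpoint, latency, and outcome, which is exactly the raw material the questions above depend on:

```python
import functools
import logging
import time

usage_log = logging.getLogger("api.usage")

def track_usage(endpoint_name):
    """Decorator that records caller, endpoint, latency, and outcome for every call."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(api_key, *args, **kwargs):
            start = time.monotonic()
            status = "ok"
            try:
                return handler(api_key, *args, **kwargs)
            except Exception:
                status = "error"
                raise
            finally:
                elapsed_ms = (time.monotonic() - start) * 1000
                usage_log.info("endpoint=%s key=%s status=%s latency_ms=%.1f",
                               endpoint_name, api_key, status, elapsed_ms)
        return wrapper
    return decorator

# Hypothetical handler:
@track_usage("get_customer")
def get_customer(api_key, customer_id):
    ...
```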

© Copyright 2013 Yaacov Apelbaum. All Rights Reserved.

Just Say No to Features

Yaacov Apelbaum-Just Say No

A quick feature inventory of most “mature” commercial software products like Microsoft Word, Lotus Notes, etc. reveals that more than half of their features are either never accessed or are outright useless (the historical evolution of the Microsoft Word tool ribbon is an example of feature bloat). If you are developing software in a startup, useless features will be the death of you and your product. The secret to developing the right set of features is learning how to say no to the rest.

Yaacov Apelbaum - Word 1.0 ribbon

Yaacov Apelbaum - Word 6.0 ribbon

Yaacov Apelbaum- Word 2003 ribbon

Yaacov Apelbaum - Word 2003 ribbon

Yaacov Apelbaum - Word 2013 ribbon
Figure 1. Bloat progression in Microsoft Word from version 1.0 to 2013

Whenever you say “yes” to a new feature, you agree to adopt a baby. And with the baby come such responsibilities as changing its diaper, waking up at 3 AM to feed it, and paying for its education (e.g. architecting, developing, integrating, and testing).

To hit the market early and within budget, your feature development strategy must focus on creating the bare essential functionality. So in this vein, stay away from developing a Swiss Army knife solution with dozens of capabilities. You should make each feature prove itself a “worthy survivor”. Your features need to be tough, lean, and mean. Embrace the SEALs’ “hell week” screening approach before letting any one of them into your development cycle.

Yaacov Apelbaum - Swiss Army Knife Bloat

Whenever product, sales, marketing, or even the CEO requests a feature, it should instinctively meet a resounding no. If you don’t feel comfortable saying no, try a variation along the lines of “not now honey, I have a splitting headache” or “I’d love to chat about this feature, but I’ve got to run into a day-long meeting right now”. If you are cornered and can’t escape, listen and take some notes, but stop there. There is no need to be heroic or do anything just yet.

If a request for a particular feature keeps surfacing over and over again from different sources, that’s when you know it’s time to take a closer look at it. Only at this point should you start considering it seriously.

What do you say to the product team that complains vocally that you won’t adopt the top one hundred features on their wish list? Remind them that developing even the most basic feature is prohibitively expensive. Alternatively, you can use the following list of tasks to illustrate the amount of work needed to insert a simple image on an MVC, CMS-driven web page:

To develop this feature, you need to (times are in minutes)…

  • Hold a management meeting to discuss it (30 min.)
  • Hold a feasibility meeting with the technical teams (30 min.)
  • Develop a schedule and, if you use Scrum, work the feature into a sprint (30 min.)
  • Develop screen wireframes (60 min.)
  • Develop HTML mockups (60 min.)
  • Develop a POC (120 min)
  • Allocate resources that are working on other features (30 min.)
  • Develop an analysis and design document for system impact (30 min.)
  • Create a code branch (30 min.)
  • Write unit tests (120 min.)
  • Write the code (120 min.)
  • Peer review the code (30 min)
  • Merge the code back into trunk (30 min.)
  • Test, revise, test, revise…(60 min.)
  • Modify user documentation (30 min.)
  • Update the product guide (30 min.)
  • Update the marketing and engineering collateral (60 min.)
  • Refactor product cost and check to see if pricing structure is affected (60 min.)
  • Deploy to production (30 min.)
  • Cross your fingers and hope that everything works out as planned
  • Total development cost = two days of work

As you can see from these steps, even the most basic feature request can cost two days of planning, design, development, testing, and documentation effort (the estimates above total roughly 990 minutes, or about two eight-hour days). On average, features tend to be 5 to 10 times more complex.

Sometimes User Feature Requests Can Be Evil
What about using your customers as the main driver for feature development? Well, this strategy has several small flaws as well. Your customers want everything under the sun, yesterday, and for free. You shouldn’t fault users for making feature requests. I encourage our customers to speak up and I diligently collect their input. In the end, most of our functionality comes from user requests. But as I have indicated, even for user feature requests, my first response is always to say no. So what do I do with all these new customer feature requests? I read them, try to forget them, and finally, for good measure, I put them in a backlog file.

It sounds terrible, I know, but don’t worry, if the feature request is important enough, it will keep on coming back and you won’t have any difficulty remembering it then. These persistent features will be the ones essential to your products that you will need to prioritize and implement.

It may not be apparent, but most software feature surveys revolve around questions like “What features are missing in our product?” or “What would be the one feature you couldn’t live without?” or “What would make this product a killer app?”. Rather than continue to ask for more in this way, the questions that you should be asking are “If you could remove one feature, what would it be?” or “How would you simplify the product?”. Sometimes the best thing you can do for your product is to develop less.

Now go ahead and practice your newly acquired skill of just saying no.

© Copyright 2011 Yaacov Apelbaum All Rights Reserved.

The Startup Leap to Success

Yaacov Apelbaum-The Startup Product Leap

One of the most challenging periods for any startup is passing through the “Valley of Death”. During this delicate phase, the organization’s burn rate is high and it has to rapidly achieve the following three goals:

  1. Move from a proof of concept (POC) to a functional commercial product
  2. Reach cash-flow break-even
  3. Transition from seed/angel funding to venture capital funding

For startups focusing on the development of SaaS products, this phase also marks an important milestone in the maturity of their product. With an increased volume of production users come stricter SLAs and the need to implement more advanced operational capabilities in areas such as change control, build automation, configuration management, monitoring, and data security.

Yaacov Apelbaum-Startup Financing Cycle

If you are managing the technology organization in an early-stage startup, you have every reason to be concerned. To the outsider, the success and failure of startups often seem shrouded in mystery: part luck, part black magic. But ask a seasoned professional who has successfully gone through the startup meat grinder and he will tell you that success has nothing to do with luck, spells, or incantations.

Having worked with a number of startups, I have come to conclude that the most common reason for product failure (beyond just not being able to build a viable POC) is the inability to control your product’s stability and scalability.

In the words of Ecclesiastes, there is a time and purpose for everything under heaven. In the early stages of a startup’s life cycle, process is negotiable. Too much process may hinder the speed at which you can build a functional POC. In later stages, reliable processes and procedures (e.g. requirements, QA, unit testing, documentation, build automation, etc.) are critical. They are the very foundations of any commercial-grade product. Poor quality and performance are self-evident, and no matter how much marketing spin and management coercion you use, if you are trying to secure an early-stage VC funding round, your problems will rapidly surface during the due diligence process.

To avoid the startup blues, keep your eyes on the following areas. Factoring them into your deployment will help you deliver on time and on budget, with the proper scalability and highest quality possible.

Design Artifacts
Before converting your POC to a functional product, take the time to design your core components (i.e. CRM, CMS, DB access, security, API, etc.). Create a high-level design that identifies all major subsystems. Once you know what they are, zoom into each subsystem and provide a low-level design for each of these as well.

  • Resist the temptation to code core functionality before you have a solid and approved scalable architecture (and the documentation for it). 
  • Let your team review and freely comment on the proposed platform architecture and deployment topology. Just because a vocal team member has religious technology preferences doesn’t mean that everyone should convert.
  • No matter how good your technical staff is, when it comes to building complex core functionality (transaction engine, web services API, etc.) don’t give any single individual carte blanche to work in isolation without presenting their design to the entire team.
  • Document the product as you develop it. Building a complex piece of software without accurate documentation is akin to trying to operate a commercial jet without its flight manual.
  • To help spread information and knowledge, establish a company-wide document repository (like a wiki or SharePoint) and store all your development and design documents under version control. Discourage anyone from keeping independent runaway documents of the system.
  • Maintain an official (and versioned) folder for the platform documentation showing product structure and components, development roadmaps, and technical marketing materials. 

Testing and QA
If you are not writing unit tests, you have no way to verify your product’s quality. Relying on QA to find your bugs means that by the time you do (if ever!) it will be too late and expensive to fix them. Spend a little extra time and write unit tests for every line of code you deploy in production. When refactoring old code, update the original unit tests as well.
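To make the point concrete, here is the kind of unit test I mean, as a minimal sketch using Python’s built-in unittest module (the function under test is hypothetical):

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```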

Just like most things in life, bugs have a lifecycle: they are born, they live, and they die. Effectively tracking them as part of your build and QA process is a prerequisite for their timely resolution.

If you are discovering a high critical-bug count in your “code complete” release (half a percent of the source code, e.g. 500 bugs for a 100,000-line code base), you may not be production-ready. Stop further deployment and conduct a thorough root cause analysis to understand why you have so many issues.

If your bug opening/closure rate remains steady (i.e. QA is opening bugs at the same rate development is closing them) and you have recurring bug bounces, you may need to reassess the competency of your development resources. This would also be a good time to have a serious heart-to-heart conversation with the developers responsible for the bugs. Be prepared for some tough HR decisions.

Monitoring and Verification
Just like you wouldn’t drive a car without a functional dashboard, you can’t run quality commercial software without real-time visibility into its moving parts. Implement a monitoring dashboard to track items such as daily builds (and breaks), server performance, user transactions, DB table space, etc.

Seeing is believing. Products like Splunk can help you aggregate your operational data.  Once you have this information, show it to your entire team. I personally like to pump it onto a large screen monitor in the development areas so everyone can get a glimpse.

Yaacov Apelbaum-Splunk Monitoring
Image 1: Splunk Dashboard in Action
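Getting your operational data into a tool like Splunk is mostly a matter of emitting events it can parse. The sketch below (event and field names are illustrative assumptions) uses plain key=value application logging, which most log aggregators can index without extra configuration:

```python
import logging

logging.basicConfig(
    filename="app_events.log",
    format="%(asctime)s level=%(levelname)s %(message)s",
    level=logging.INFO,
)
events = logging.getLogger("platform.events")

def record_build(build_id, passed, duration_sec):
    # key=value pairs are trivial for most log indexers to extract into fields
    events.info("event=build build_id=%s passed=%s duration_sec=%d",
                build_id, passed, duration_sec)

def record_transaction(user_id, action, latency_ms):
    events.info("event=txn user=%s action=%s latency_ms=%.1f",
                user_id, action, latency_ms)

record_build("2013.06.1", True, 412)
record_transaction("u-1001", "search", 87.4)
```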

Security, Scalability and Operations
Unless you are in the snake oil sales business, build your production environment from the get-go for scalability, security, and redundancy. Don’t look for “bargains” on these technologies; leverage commercial-grade load balancers, firewalls, and backup solutions.

Your production environment is critical to your success, so don’t treat it as a second class citizen or try to manage it with part time resources. As you will quickly discover, a dedicated sys admin and a DBA who know your platform intimately are worth their weight in gold.

You must achieve operational capabilities in build automation, release management, bug tracking, and configuration management before going live.  If you don’t, be prepared to spend most of your productive time fixing boo-boos in the wee hours of the night.

Implementing many of the above mentioned measures will give you a significant tactical advantage as well as a strategic boost when negotiating with potential VCs.  Having these capabilities on your utility belt will also help you calmly face any future issues as your startup matures.

 

© Copyright 2011 Yaacov Apelbaum All Rights Reserved.

Crafting Great Software Features Part-2

Yaacov Apelbaum-Sleep Master
The Sleep Master 7000SX: It captures and Tweets all your sleep stats while you snooze!

In his book, “The Diamond Age,” Neal Stephenson classifies technologists as belonging to one of two categories: (1) those who hone existing technologies and (2) those who forge and create new ones.

There is a fundamental difference between how ad hoc assemblers and software crafters approach building a product. Ad hoc assemblers tend to start with the technology and the solutions it offers. They speak in terms of using a framework, language, or protocol to solve a problem. They frequently make statements like “the next version of X will solve the problem of Y”. Ad hoc assemblers, who often suffer from myopic vision, will address customer needs first by boxing some existing technologies together and then by shoehorning a GUI on top of them. The sum of the features and functionality will be driven entirely by the framework of the underlying technology.

This almost always results in marginal user experience and product performance. The solution isn’t designed for ease of use. It’s not even intended to solve any concrete problem. Its primary purpose is to act as a vehicle for marketing hype and sales. You can recognize these products by the emphasis they place on mile-long lists of features, most of which are poorly implemented and of little commercial value.

It’s a simple value proposition: it is more important for some neat technologies to be shipped (the latest buzz is cloud computing and social networks)  than for products to be useful. Very few great products are designed this way.

Software crafters, on the other hand, understand that real people will consume their product, and every decision about its design is made not only with a specific user in mind but also with the specific problems that user needs to solve in mind. This and only this drives the choice of platform, language, communication protocol, or database.

Your customers are no different than the people who are looking to buy a specific tool for a job. To deliver the right product functionality without getting lost in the technology jungle, you need to develop an understanding of how successful products are developed in other fields.

Manufacturers of tools and appliances all go through the same steps in balancing technology, requirements, and usability. You can learn a lot from the successes of products like the iPod by recognizing that when we buy a product, we almost never care about unnecessary “fluff” features (like a social-network-enabled timer that can capture 4 different types of sleep statistics). Rather, what we want is a product that provides valuable features (services) and performs them well.

 

© Copyright 2010 Yaacov Apelbaum All Rights Reserved.

Ode to The Code Monkey

 Yaacov Apelbaum-Code Monkey  
The Code Monkey (inspired by A Dream Within A Dream by Edgar Allan Poe)

Take another slap upon the cheek,
While slaving on this project, week by week. 
You have been wrong to work so hard,
Expecting riches and managerial regard.
Grinding out functions awake and in a dream,
Will not fetch rewards or professional esteem.

What you lack are not more lines of code, 
Rather it’s architecture and a road.
To substitute quality with speed,
Is the motto of the code monkey creed. 
You who seek salvation in RAD extreme
Will find, alas, a dream within a dream.

If you examine your latest stable build,
You will notice many bugs that haven’t been killed.
Strangely, they seem to grow in relation,
To your oversized code base inflation.
So many new features! How did they creep? 
Through scope expansion, they trickle deep.

Building good software is hard to manifest,
If you fail the requirements to first digest.
The lesson to learn from this development ditty,
Is that no matter how clever you are or witty,
If you fudge the schedule and estimation phase,
There is but one reward for you. The death march malaise!

© Copyright 2010 Yaacov Apelbaum All Rights Reserved.