Finovate Spring 2011 and the 3G Blood Bath

Yaacov Apelbaum-Billboard Finovate Spring 2011

Several weeks ago, I had the opportunity to demo at Finovate Spring 2011. In the past, I have presented at a variety of professional conferences such as Microsoft PDC and IEEE, but preparing and presenting at Finovate was a real eye-opener for me.

In our presentation, I showcased the platform through several complex interactions. As an illustration, we decided to follow a “day in the life of an average teenage user”.  Our user, while cruising around town, utilized his mobile device (a Google Nexus-S running Android 2.3) to perform the following tasks:

  • Get product information by scanning the barcode with the built-in camera
  • Receive budgetary advice (“You don’t have enough money in your account to buy this camera. Would you like to create a goal for it?”)
  • Create goals on the fly, and post them to a social network. (“Hey everyone, my birthday is coming up. I only need $20 more bucks to buy this great camera!”)
  • Get a chip-in from a member of his social network (who clicked on the Facebook link, was taken to the chip-in page, and used a credit card to contribute $20)
  • Locate the best retail deal and store based on price, availability, and location, utilizing the phone’s geolocation capability
  • Complete the transaction at the retail POS using the phone’s NFC capability

In addition to actually demoing all of these features, I had to allocate enough time during the presentation to talk about data security, encryption, and authentication, as well as to explain how the real-time analytics and business intelligence engines monitored and interacted with the user.

If you’ve never been to Finovate, then you might not know that the demonstrations are attended by the cream of the crop of financial innovators and the banking industry.  You can’t pull any wool over their eyes, they’re too savvy; your demo has to be perfect. For many Fintech startups, a successful Finovate demo is one of the best ways to get their name around, secure a major strategic partnership, and even get VC funding.

Yaacov Apelbaum-Finovate Spring 2011 Presentation 1 Yaacov Apelbaum-Finovate Spring 2011 Presentation 2

This year, there were over 850 people in the audience; the place was packed.  Due to the condensed nature of the conference, each demo was required to be exactly 7 minutes long. When your 7 minutes is up, the bell rings, the lights go off, and you get swiftly kicked off the stage so as to clear room for the next presenter.

The whole event is a strange combination of high-tech magic show, circus act, and speed dating.  Following the philosophy that there is no such thing as bad press, it’s not unusual to see presenters accompany their product demo with a ukulele solo or a juggling act.

Knowing how challenging the time and content delivery requirements were, we laid out the demo components eight weeks before the presentation and then on a daily basis, we spent an hour practicing it in front of our peers. As the rehearsals progressed, we improved our timing, streamlined the script, and tweaked the presentation to make it more concise.

The day before the conference, we arrived at the presentation hall and got on stage for the final dress rehearsal and to test the AV equipment and connectivity.

Yaacov Apelbaum-Finovate 2011 Presenter List

During the rehearsal, as I placed the presentation phone on the podium, I noticed that it suddenly lost the 3G network signal.  I moved the phone off the podium and the signal came back.  Clearly, you can’t run a live demo if you don’t have connectivity.  Backstage after the rehearsal, I ran into the network guy and asked him why there was such poor 3G connectivity on stage. The man just shrugged his shoulders and said, “It’s a metal building. Your best bet is to connect to the Finovate wireless network tomorrow.”

The next day, thirty minutes before our 1:16 PM demo, we arrived backstage to gear up. I again checked all connectivity and verified that I was still on the network.  It didn’t occur to me to check what wireless network I was actually connected to.

After handing in all of our equipment to the Finovate staff, we stayed backstage and watched the presenters go at it.  It turned out to be a blood bath.

One company demoing an iPhone version of their browser app was doing great until they tried to actually log in from the device (using 3G).  After 30 seconds of failed attempts, they made the strategic decision to continue without the mobile app and instead narrated what the app was supposed to do.

Yaacov Apelbaum-Finovate Spring 2011 Conferance

Another company demoing their revolutionary banking web portal (using a laptop with a 3G USB wireless network card) also went up in smoke when they discovered that they couldn’t log in to their own site. Their CTO, in an attempt to save the day (still apparently thinking it was some kind of misconfiguration issue), tried to reconfigure the proxy settings on his laptop, forgetting that he was sharing his screen with 850 people.  The audience got treated to his administrative user ID, password, and firewall settings.

This went on and on. One after another, the 3Gers went down like flies. Almost every iPhone app demo using 3G ended up with some critical connectivity problem.

Then, it was our turn. I got on stage and instinctively looked at the wireless network one more time.  To my horror, I noticed that I had almost no reception and that my laptop was strangely connected to a network called “Coffee House”.  “Strange,” I thought to myself, “why would Finovate name their network ‘Coffee House’?” It took me another few seconds to realize that I was connected to the wrong network.  Next, I looked at the demo phone, but it was still connected to the “Finovate” network.  You can’t run a demo with only fifty percent connectivity!

As the announcer was introducing us, I noticed a LAN cable on the podium. Figuring that at that point I had nothing to lose, I plugged it into my laptop and quickly launched the browser. After what seemed like an eternity, and just as my partner began the presentation, the home page loaded. What a close call!

Our demo itself went down like a fine Merlot. The pages loaded instantly, the phone transmitted without any issues, and we even finished presenting with a few seconds to spare.  On the way out, I asked the network technician why he hadn’t warned all the presenters that the 3G was flaky. He looked at me with a twinkle in his eye and pointed at a large sign on the wall that read:

“To all presenters, due to the fact that we are located in a metal building and can’t guarantee 3G connectivity on stage, please utilize the “Finovate” wireless network! We will be happy to configure your devices for you.”

 

© Copyright 2011 Yaacov Apelbaum All Rights Reserved.

Big O Notation

Yaacov Apelbaum-big-o-and-efficiency

Recently, I was chatting with a friend of mine about pre-acquisition due diligence. Charlie O’Rourke is one of the most seasoned technical executives I know. He’s been doing hardcore technology for over 30 years and is one of the pivotal brains behind WU/FDC’s multi-billion dollar payment processing platforms. The conversation revolved around a method he uses for identifying processing bottlenecks.
 
His thesis statement was that in a world where you need to spend as little as you can on an acquisition and still turn a profit quickly, problems of poor algorithmic implementation are “a good thing to have”, because they are relatively easy to identify and fix.  This is true, assuming that you have his grasp of large-volume transactional systems and you are handy with complex algorithms.

In today’s environment of rapid system assembly via the mashing together of frameworks and off-the-shelf functionality like CRM or ERP, mastery of data structures among younger developers is almost unheard of.

It’s true, most developers will probably never write an algorithm from scratch. But sooner or later, every coder will have to either implement or maintain a routine that has some algorithmic functionality. Unfortunately, when it comes to efficiency, you can’t afford to make uninformed decisions, as even the smallest error in choosing an algorithm can send your application screaming in agony to Valhalla.

So if you have been suffering from recursive algorithmic nightmares, have never fully understood the concept of algorithmic efficiency (or plan to interview for a position on my team), here is a short and concise primer on the subject.

First let’s start with definitions.

Best or Bust:
An important principle to remember when selecting algorithms is that there is no such thing as the “best algorithm” for all problems. Efficiency will vary with data set size and the availability of computational resources (memory and processor). What is trivial in terms of processing power for the NSA could be prohibitive for the average company.

Efficiency:
Algorithmic efficiency is the measure of how well a routine can perform a computational task. One analogy for algorithmic efficiency and its dependence on hardware (memory capacity and processor speed) is the task of moving a ton of bricks from point A to point B a mile away.  If you use a Lamborghini for this job (small storage but fast acceleration), you will be able to move a small number of bricks very quickly, but the downside is that you will have to repeat the trip multiple times. On the other hand, if you use a flatbed truck (large storage but slow acceleration), you will be able to complete the entire project in a single run, albeit at a slower pace.

Notation:
The expression for algorithmic efficiency is commonly referred to as “Big O” notation.  This is a mathematical representation of how an algorithm’s running time (or memory use) grows as the size of its input grows. When plotted as a function of input size, algorithms will remain flat, grow steadily, or follow steeper curves.

The Pessimistic Nature of Algorithms:
In the world of algorithm analysis, we always assume the worst case scenario.  For example, if you have an unsorted list of unique numbers and it’s going to take your routine an hour to go through it, then it is possible in the best case scenario that you could find your value on the first try (taking only a minute). But following the worst case scenario theory, your number could end up being the last one in the list (taking you the full 60 minutes to find it). When we look at efficiency, it’s necessary to assume the worst case scenario.

 Yaacov Apelbaum-big-o Plot
Image 1: Sample Performance Plots of Various Algorithms

O(1)
Performance is constant in time (processor utilization) and space (memory utilization) regardless of the size of the data set. When viewed on a graph, these functions show no growth and remain flat.

An O(1) algorithm’s performance is independent of the size of the data set on which it operates.

An example of this type of algorithm is testing the value of a variable against a predefined hash table.  The single lookup involved in this operation eliminates any growth curve.
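To make this concrete, here is a minimal sketch in Python (the status-code table is just an illustrative stand-in): a dictionary lookup costs roughly the same whether the table holds ten entries or ten million.

    status_codes = {200: "OK", 301: "Moved Permanently", 404: "Not Found", 500: "Server Error"}

    def describe(code):
        # One hash computation and one bucket probe: O(1) on average,
        # independent of how many entries status_codes holds.
        return status_codes.get(code, "Unknown")

    print(describe(404))   # Not Found
    print(describe(418))   # Unknown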

O(n)
Performance grows linearly, in direct proportion to the size of the input data set: double the data and you double the work.

Forms like O(2N) or O(10 + 5N) indicate that constants from specific business logic have been blended into the implementation (which should be avoided if possible); in Big O terms they still reduce to O(N).

O(N+M) is another way of saying that two data sets are involved, and that their combined size determines performance.

An example of this type of algorithm is finding an item in an unsorted list using a linear search, which walks down the list one item at a time without jumping.  The time taken to search grows at the same rate as the list does.
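A minimal linear search sketch in Python (illustrative only) shows why the cost tracks the list size: in the worst case, every element gets examined.

    def linear_search(items, target):
        # Walk the list one element at a time; the worst case touches
        # all n elements, so the running time is O(n).
        for index, value in enumerate(items):
            if value == target:
                return index
        return -1  # not found after examining the entire list

    print(linear_search([7, 3, 9, 1, 4], 1))   # 3
    print(linear_search([7, 3, 9, 1, 4], 8))   # -1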

O(N2)
Performance is directly proportional to the square of the size of the input data set.  This happens when the algorithm processes each element of a set and that processing requires another pass through the entire set (hence the squared term). Additional nested inner loops result in O(N3), O(N4), O(Nn), and so on.

Examples of this type of algorithm are Bubble Sort, Shell Sort, Selection Sort, and Insertion Sort; Quicksort also degrades to O(N2) in its worst case.
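As an illustration, here is a bare-bones Bubble Sort sketched in Python; the nested passes over the data are what produce the O(N2) growth.

    def bubble_sort(items):
        # The outer loop runs n times; each pass rescans the unsorted tail,
        # so total work grows with the square of n.
        data = list(items)              # sort a copy, leave the input alone
        n = len(data)
        for i in range(n):
            for j in range(n - 1 - i):
                if data[j] > data[j + 1]:
                    data[j], data[j + 1] = data[j + 1], data[j]
        return data

    print(bubble_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]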

O(2N)
The amount of processing will double with each additional element of the input data set, so the execution time of an O(2N) algorithm grows exponentially.

The 2 indicates that time or memory doubles for each new element in the data set.  In reality, these types of algorithms do not scale well, even with a lot of fancy hardware.
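The textbook illustration of this kind of growth is the naive recursive Fibonacci calculation; here is a rough Python sketch.

    def fib(n):
        # Every call spawns two more calls, so the call tree roughly
        # doubles with each additional n: about O(2^n) running time.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(10))  # 55, instant
    # fib(35) already takes seconds; fib(80) is hopeless without memoization.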

O(log n) and O(n log n) 
Processing is iterative, and the growth curve rises quickly at the beginning of the execution and then slowly tapers off as the size of the data set increases.  For example, if a data set contains 10 items, it might take one second to process; if it contains 100 items, two seconds; if it contains 1,000 items, three seconds, and so on. Doubling the size of the input data set has little effect on running time because after each iteration the remaining data is halved. This makes O(log n) algorithms very efficient when dealing with large data sets.

Generally, log N implies log2 N, which refers to the number of times you can partition a data set in half, then partition the halves, and so on.  For example, for a data set with 1,024 elements, you would perform at most 10 lookups (log2 1024 = 10) before either finding your value or running out of data.

Lookup #   Initial Dataset   New Dataset
    1           1024              512
    2            512              256
    3            256              128
    4            128               64
    5             64               32
    6             32               16
    7             16                8
    8              8                4
    9              4                2
   10              2                1

A good illustration of this principle is the binary search. It works by selecting the middle element of the data set and comparing it against the desired value to see if it matches. If the target value is higher than the value of the selected element, it selects the upper half of the data set and performs the comparison again. If the target value is lower, it performs the operation against the lower half. The algorithm continues to halve the data set with each iteration until it finds the desired value or exhausts the data set.
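A minimal Python version of the binary search described above (illustrative only):

    def binary_search(sorted_items, target):
        # Halve the search window on every iteration: at most log2(n) probes.
        low, high = 0, len(sorted_items) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                low = mid + 1    # keep searching the upper half
            else:
                high = mid - 1   # keep searching the lower half
        return -1                # data set exhausted

    data = list(range(0, 2048, 2))    # 1,024 sorted even numbers
    print(binary_search(data, 1338))  # 669
    print(binary_search(data, 1337))  # -1 (odd, not present)

Note that the input must already be sorted; sorting it first costs O(N log N) and dominates the search itself.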

The important thing to note about log2 N type algorithms is that they grow slowly. Doubling N has a minor effect on performance, and the logarithmic curves flatten out smoothly.

Examples of these types of algorithms are Binary Search (O(log N)) and Heapsort, Merge Sort, and Quicksort in the average case (all O(N log N)).

Scalability and Efficiency
An O(1) algorithm scales better than an O(log N),
which scales better than an O(N),
which scales better than an O(N log N),
which scales better than an O(N2),
which scales better than an O(2N).

Scalability does not equal efficiency. A well-coded O(N2) algorithm can outperform a poorly coded O(N log N) algorithm, but only for certain data set sizes. At some point, the performance curves of the two algorithms will cross and their relative efficiency will reverse.
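A rough, machine-dependent way to see the crossover for yourself is to time a simple O(N2) sort against an O(N log N) one at a few input sizes; exactly where the curves cross depends entirely on the implementations and the hardware. A small Python sketch:

    import random
    import timeit

    def insertion_sort(items):
        # Simple O(n^2) sort with very low per-element overhead.
        data = list(items)
        for i in range(1, len(data)):
            key, j = data[i], i - 1
            while j >= 0 and data[j] > key:
                data[j + 1] = data[j]
                j -= 1
            data[j + 1] = key
        return data

    for n in (10, 100, 1000):
        sample = [random.random() for _ in range(n)]
        t_quadratic = timeit.timeit(lambda: insertion_sort(sample), number=50)
        t_nlogn = timeit.timeit(lambda: sorted(sample), number=50)
        print(f"n={n:5d}  insertion sort: {t_quadratic:.4f}s   built-in O(N log N) sort: {t_nlogn:.4f}s")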

What to Watch for when Choosing an Algorithm
The most common mistake when choosing an algorithm is the belief that an algorithm that was used successfully on a small data set will scale effectively to large data sets (factor 10x, 100x, etc.).

For many situations with modest data sets, an O(N2) algorithm like Bubble Sort will work well enough. If you switch to a more complex O(N log N) algorithm like Quicksort, you are likely to spend a long time refactoring your code and realize only marginal performance gains.

More Resources
For a great illustration of various sorting algorithms in live action, check out David R. Martin’s animated demo.  For more formal coverage of algorithms, check out Donald Knuth’s epic publication on the subject, The Art of Computer Programming, Volumes 1-4.

If you are looking for some entertainment while learning the subject, check out AlgoRythmics’ series on sorting through dancing.

 

© Copyright 2011 Yaacov Apelbaum All Rights Reserved.

Scaling the Wall

Yaacov Apelbaum-Climbing

Eagerly beginning the wall to scale,
Using only my hands and feet.
Resolved to follow the hardest trail,
I confidently place my cleat.

Suddenly, there’s no foothold to rest,
Desperately, I cling to the wall.
My heart is pounding in my chest,
My ascent slows to a crawl,

My feet and arms tire and shake,
The safety line invites me to bail.
Should I reach for the line and forego the ache,
Or continue to try, maybe fail?

The voice from below says: “Look to the right”,
I reach and grab a far hold.
Propelling free from my previous plight,
Good advice is more precious than gold.

It’s romantic to view the world as a wall,
Scaled heroically by pure self-esteem.
But in complex endeavors you’re certain to fall,
Without the support of a team.

 

© Copyright 2011 Yaacov Apelbaum All Rights Reserved.

The Startup Leap to Success

Yaacov Apelbaum-The Startup Product Leap

One of the most challenging periods for any startup is passing through the “Valley of Death”. During this delicate phase, the organization’s burn rate is high and it has to rapidly achieve the following three goals:

  1. Move from a proof of concept (POC) to a functional commercial product
  2. Reach a cash flow break even
  3. Transition from seed/angel funding to venture capital funding

For startups focusing on the development of SaaS products, this phase also marks an important milestone in the maturity of their product. With an increased volume of production users come stricter SLAs and the need to implement more advanced operational capabilities in areas such as change control, build automation, configuration management, monitoring, and data security.

Yaacov Apelbaum-Startup Financing Cycle

If you are managing the technology organization in an early-stage startup, you have every reason to be concerned. To the outsider, the success and failure of startups often seem to be shrouded in mystery: part luck, part black magic.  But ask a seasoned professional who has successfully gone through the startup meat grinder, and he will tell you that success has nothing to do with luck, spells, or incantations.

Having worked with a number of startups, I have come to conclude that the most common reason for product failure (beyond just not being able to build a viable POC) is the inability to control your product’s stability and scalability.

In the words of Ecclesiastes, there is a time and purpose for everything under heaven.  In the early stages of a startup’s life cycle, process is negotiable.  Too much process may hinder the speed at which you can build a functional POC.  In later stages, reliable processes and procedures (e.g., requirements, QA, unit testing, documentation, build automation) are critical; they are the very foundations of any commercial-grade product.  Poor quality and performance are self-evident, and no matter how much marketing spin and management coercion you use, if you are trying to secure an early-stage VC funding round, your problems will rapidly surface during the due diligence process.

To avoid the startup blues, keep your eyes on the following areas. Factoring them into your deployment will help you deliver on time and on budget, with the proper scalability and highest quality possible.

Design Artifacts
Before converting your POC to a functional product, take the time to design your core components (e.g., CRM, CMS, DB access, security, API).  Create a high-level design that identifies all major subsystems.  Once you know what they are, zoom into each subsystem and provide a low-level design for each of these as well.

  • Resist the temptation to code core functionality before you have a solid and approved scalable architecture (and the documentation for it). 
  • Let your team review and freely comment about the proposed platform architecture and deployment topology.  Just because a vocal team member has religious technology preferences doesn’t mean that everyone should convert.
  • No matter how good your technical staff is, when it comes to building complex core functionality (transaction engine, web services API, etc.), don’t give any single individual carte blanche to work in isolation without presenting their design to the entire team.
  • Document the product as you develop it. Building a complex piece of software without accurate documentation is akin to trying to operate a commercial jet without its flight manual.
  • To help spread information and knowledge, establish a company-wide document repository (like a wiki or SharePoint) and store all your development and design documents under version control.  Discourage anyone from keeping independent, runaway copies of system documentation.
  • Maintain an official (and versioned) folder for the platform documentation showing product structure and components, development roadmaps, and technical marketing materials. 

Testing and QA
If you are not writing unit tests, you have no way to verify your product’s quality. Relying on QA to find your bugs means that by the time you do (if ever!), it will be too late and too expensive to fix them.  Spend a little extra time and write unit tests for every line of code you deploy in production.  When refactoring old code, update the original unit tests as well.
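The mechanics are cheap. Here is a minimal sketch using Python’s built-in unittest module; apply_discount is a purely hypothetical helper standing in for whatever routine you are actually shipping.

    import unittest

    def apply_discount(price, percent):
        # Hypothetical helper: reduce price by percent, never below zero.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(max(price * (1 - percent / 100), 0), 2)

    class ApplyDiscountTests(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(100.0, 25), 75.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(59.99, 0), 59.99)

        def test_invalid_percent_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()

Running the file executes all three cases in a fraction of a second, and when apply_discount gets refactored later, these tests are the first thing to update.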

Just like most things in life, bugs have a lifecycle: they are born, they live, and they die.  Effectively tracking them as part of your build and QA process is a prerequisite for their timely resolution.

If you are discovering a high critical-bug count in your “code complete” release (half a percent of the source code, e.g. 500 bugs in a 100,000-line code base), you may not be production ready.  Stop further deployment and conduct a thorough root cause analysis to understand why you have so many issues.

If your bug opening/closure rate remains steady (i.e. QA is opening bugs at the same rate development is closing them) and you have recurring bug bounces, you may need to reassess the competency of your development resources. This would also be a good time to have a serious heart-to-heart conversation with the developers responsible for the bugs. Be prepared for some tough HR decisions.

Monitoring and Verification
Just like you wouldn’t drive a car without a functional dashboard, you can’t run quality commercial software without real-time visibility into its moving parts.  Implement a monitoring dashboard to track items such as daily builds (and breaks), server performance, user transactions, DB table space, etc.

Seeing is believing. Products like Splunk can help you aggregate your operational data.  Once you have this information, show it to your entire team. I personally like to pump it onto a large screen monitor in the development areas so everyone can get a glimpse.

Yaacov Apelbaum-Splunk Monitoring
Image 1: Splunk Dashboard in Action

Security, Scalability and Operations
Unless you are in the snake oil sales business, build your production environment from the get-go for scalability, security, and redundancy.  Don’t look for “bargains” on these technologies; leverage commercial-grade load balancers, firewalls, and backup solutions.

Your production environment is critical to your success, so don’t treat it as a second class citizen or try to manage it with part time resources. As you will quickly discover, a dedicated sys admin and a DBA who know your platform intimately are worth their weight in gold.

You must achieve operational capabilities in build automation, release management, bug tracking, and configuration management before going live.  If you don’t, be prepared to spend most of your productive time fixing boo-boos in the wee hours of the night.

Implementing many of the above-mentioned measures will give you a significant tactical advantage as well as a strategic boost when negotiating with potential VCs.  Having these capabilities on your utility belt will also help you calmly face any future issues as your startup matures.

 

© Copyright 2011 Yaacov Apelbaum All Rights Reserved.

Descend, ye Cedars, Haste ye Pines

Yaacov Apelbaum-Solomon Temple

After much procrastination, I’ve finally taken the plunge and digitized our CD collection. It was a colossal, multi-month project but now, hundreds of hours of streaming music later, I got the opportunity to reevaluate Bach and Handel, two of my favorite composers.

Bach and Handel share some interesting history. They were born about a month apart (Bach on 31 March 1685, Handel on 23 February 1685), grew up 60 miles from each other, used the same snake-oil-salesman eye surgeon (John Taylor), and each passed on the opportunity to marry Buxtehude’s daughter Anna Margareta.  Despite their parallel lives, each eventually developed a distinctive musical style, and while both had strong religious convictions, Bach raised a large family (20 children) while Handel remained a bachelor.

Yaacov Apelbaum-Bach

For me, Bach’s music is a pure intellectual experience. I find his work to have an almost algorithmic quality.  With a few descending organ notes in the Toccata and Fugue in D Minor, Bach rips the universe wide open revealing G-d’s mathematical handiwork everywhere.

Yaacov Apelbaum-Handel

Handel, on the other hand, mounts a direct assault on your emotions. He first floats the theme, and then in repeating iterations he drives it in (almost all of his oratorios follow this MO). Never verbose, he creates the ultimate expression of the human kinship and longing for the divine through minimalist orchestration.

As for artistic evolution, Bach’s style remained more or less constant throughout his career, and he showed little or no interest in new musical innovations (he rejected the pianoforte because it sounded too mellow and was limited in its expressiveness compared to the harpsichord). Handel, on the other hand, was a great experimenter, and his style evolved throughout his career. He wrote Esther almost a decade before it was performed, but then shelved it because he realized that the audience wasn’t ready for it. It is noteworthy that in the end, it was Handel—the undisputed master of the Italian opera—who eventually did away with this pompous and pretentious genre and replaced it with the clean and concise style of the oratorio.

One example of how Handel uses simple orchestration and words as an effective alternative to the mega operas of his day can be found in the closing part of Esther. Handel dedicates over eleven minutes to a choral tour de force on the rebuilding of the Temple in Jerusalem.  This finale is made up of only eight lines of text with trumpet accompaniment, a simple chorus line, and dueling basses, but the effect is breathtaking.

Yaacov Apelbaum-Solomon's Temple

Chorus
For ever bless’d be thy holy name,
Let Heav’n and earth his praise proclaim.

The Lord his people shall restore,
And we in Salem shall adore.

Mount Lebanon his firs resigns,
Descend, ye Cedars, haste ye Pines
To build the temple of the Lord,
For G-d his people has restor’d.

No siree!  They don’t write music like that anymore.

© Copyright 2010 Yaacov Apelbaum All Rights Reserved

Crafting Great Software Features Part-2

Yaacov Apelbaum-Sleep Master
The Sleep Master 7000SX: It captures and Tweets all your sleep stats while you snooze!

In his book, “The Diamond Age,” Neal Stephenson classifies technologists into one of two categories: (1) those who hone existing technologies and (2) those who forge and create new ones.

There is a fundamental difference between how ad hoc assemblers and software crafters approach building a product. Ad hoc assemblers tend to start with the technology and the solutions it offers.  They speak in terms of using a framework, language, or protocol to solve a problem.  They frequently make statements like “the next version of X will solve the problem of Y”. Ad hoc assemblers, who frequently suffer from myopic vision, address customer needs by first boxing some existing technologies together and then shoehorning a GUI on top.  The sum of the features and functionality is driven entirely by the framework of the underlying technology.

This almost always results in marginal user experience and product performance. The solution isn’t designed for ease of use. It’s not even intended to solve any concrete problem. Its primary purpose is to act as a vehicle for marketing hype and sales. You can recognize these products by the emphasis they place on mile-long lists of features, most of which are poorly implemented and of little commercial value.

It’s a simple value proposition: it is more important for some neat technologies to be shipped (the latest buzz is cloud computing and social networks)  than for products to be useful. Very few great products are designed this way.

Software crafters, on the other hand, understand that real people will consume their product, and every decision about its design is made not only with a specific user in mind but also with the specific problems that user needs to solve. This, and only this, drives the choice of platform, language, communication protocol, or database.

Your customers are no different than the people who are looking to buy a specific tool for a job. To deliver the right product functionality without getting lost in the technology jungle, you need to develop an understanding of how successful products are developed in other fields.

Manufacturers of tools and appliances all go through the same steps in balancing technology, requirements, and usability. You can learn a lot from the successes of products like the iPod by recognizing that when we buy a product, we almost never care about unnecessary “fluff” features (like a social-network-enabled timer that can capture 4 different types of sleep statistics).  Rather, what we want is something that provides valuable features (services) and performs them well.

 

© Copyright 2010 Yaacov Apelbaum All Rights Reserved.

Ode to The Code Monkey

 Yaacov Apelbaum-Code Monkey  
The Code Monkey (inspired by A Dream Within A Dream by Edgar Allan Poe)

Take another slap upon the cheek,
While slaving on this project, week by week. 
You have been wrong to work so hard,
Expecting riches and managerial regard.
Grinding out functions awake and in a dream,
Will not fetch rewards or professional esteem.

What you lack are not more lines of code, 
Rather it’s architecture and a road.
To substitute quality with speed,
Is the motto of the code monkey creed. 
You who seek salvation in RAD extreme
Will find, alas, a dream within a dream.

If you examine your latest stable build,
You will notice many bugs that haven’t been killed.
Strangely, they seem to grow in relation,
To your oversized code base inflation.
So many new features! How did they creep? 
Through scope expansion, they trickle deep.

Building good software is hard to manifest,
If you fail the requirements to first digest.
The lesson to learn from this development ditty,
Is that no matter how clever you are or witty,
If you fudge the schedule and estimation phase,
There is but one reward for you. The death march malaise!

© Copyright 2010 Yaacov Apelbaum All Rights Reserved.

Ripping Off Google

Yaacov Apelbaum-Google Add Con

My wife is a potter. She conducts most of her glazedOver pottery business on-line. Over the past 2 years, she has incrementally leveraged social networks to supplement her regular marketing and advertising efforts, and she has progressively built up a large following of loyal customers and a network of peer artists. She will tell you that, without a doubt, a focused Internet advertising campaign translates instantly into higher site traffic and sales.

glazedOver Pottery-Coffee Mug

Clearly, an important component in successfully operating a small on-line craft business is to leverage social and professional networks and to tactfully promote your product. One way to do this is by paying a service to expose your store. Another, more organic method is to form a guild that promotes the interests of a group of related artists via blogs and other publications. High-traffic sites like these typically contain interviews, product reviews, giveaways, and links to member shops.

Internet barons like Google and Microsoft are aware of the relationship between traffic and revenue, and so they court high-volume sites to host advertising content.  One of the most popular on-line money-making schemes (eclipsed only by Nigerian get-rich-quick 419 offers) is the Google AdSense program. With programs like AdSense, you place sponsored advertisements on your blog, and Google then delivers specialized content based on your site classification. The premise of this model is that if you have a high-traffic site, you will most likely generate product or service sales for the ad sponsor. The more clicks, the more you make.

Google obviously requires that sites participating in the AdSense program be legitimate websites or blogs. Their definition of deceptive or manipulative behavior is quite specific, as you can see from their guidelines:

Quality guidelines
Make pages primarily for users, not for search engines. Don’t deceive your users or present different content to search engines than you display to users, which is commonly referred to as “cloaking.”

Avoid tricks intended to improve search engine rankings. A good rule of thumb is whether you’d feel comfortable explaining what you’ve done to a website that competes with you. Another useful test is to ask, “Does this help my users? Would I do this if search engines didn’t exist?”

  • Avoid hidden text or hidden links.
  • Don’t use cloaking or sneaky redirects.
  • Don’t send automated queries to Google.
  • Don’t load pages with irrelevant keywords.
  • Don’t create multiple pages, subdomains, or domains with substantially duplicate content.
  • Avoid “doorway” pages created just for search engines, or other “cookie cutter” approaches such as affiliate programs with little or no original content.

If your site participates in an affiliate program, make sure that your site adds value. Provide unique and relevant content that gives users a reason to visit your site first.

Hosting Google adware has both its fans and its critics. Some users abstain from the practice on the grounds that it cheapens and waters down their brand (akin to placing a 30 foot billboard on your Victorian mansion), but many other popular blogs and websites do it enthusiastically, and they make some decent $$$ in the process.

It seems that if necessity is the mother of invention, then revenue from high Internet traffic is the father of the con. The practice of site-sponsored advertising has now become so popular that many enterprising individuals and organizations are running large campaigns of site scams known as MFA (made for AdSense). These scraper sites are siphoning millions of dollars from the likes of Google.

The scam is ingenious and requires dedicated resources and some technical skill (like purchasing domains and manipulating content). I discovered this several days ago after my wife told me that someone was showcasing her pottery work on their site without crediting her. She first came upon it when she noticed an interesting pottery link in her Twitter feed and asked me to have a look. After clicking on the link, I was routed to a site called VisionPottery.com.  At first, the site looked legit; just another average blog dedicated to hand-crafted goods.

Yaacov Apelbaum-Marla TwittVisionPottery.com-Hand Made Pottery Bowls VisionPottery.com-Error Page   VisionPottery.com Domain Information   Beverly Butler Emerald Enterprises LLC
Twitter feed, article, other site pages, and domain ownership information

The blog was designed reasonably well. The cover article, titled “Folk Art Craft-From the past”, featured a set of my wife’s pottery bowls. I scanned the article for a link to her shop (assuming that the author used her work as an illustration), but found neither links nor credits.

Yaacov Apelbaum-Caroline Jones

When I checked the properties of the actual image, I was surprised to discover that it was hosted on their server and not linked to her site in any way (clearly, a copyright violation). I figured that the next best thing would be to read the article more carefully. The essay turned out to be laced with numerous grammatical errors, and its contents made little sense.

Massive grammatical incoherence smacks of either human- or machine-altered text, so I performed a quick on-line search and located the original essay on Articlebase.com.

I diffed both essays and confirmed that the article hosted on VisionPottery.com was in fact a plagiarized version.

A textual analysis of the content revealed that the changes were based purely on a simple word-substitution technique where one word, for example “America”, is replaced by another, such as “United States”.  It is clear that the plagiarizer’s objective was not to ‘lift’ the ideas from the article. Rather, it was an attempt to prevent search engines from identifying and tagging the content as duplicate and thus to improve the site’s SEO (search engine optimization). This was also confirmed by the fact that the name of the original author could still be found at the bottom of the plagiarized text.

An examination of the site structure revealed that it was built with a combination of machine-generated scripts (many still contained the default WordPress template settings) and manual customization (logos and UI elements). The content, on the other hand, was managed by human ‘adaptors’ who took existing materials and resources from various on-line locations and altered them to create the appearance of an original composition, all for the sole purpose of scoring better search engine visibility.

Yaacov Apelbaum-Hunt Mallard Cove.com

Checking the VisionPottery.com domain registration shed some additional light on its modus operandi. The site is registered to Beverly Butler of Emerald Enterprise LLC; Beverly proudly advertises herself as the owner of the same on LinkedIn. As it happens, the server hosting her VisionPottery site also hosts many other parasitic marketing sites that operate along the same lines. Interestingly, the plagiarized version of the essay text where my wife’s bowls were found was also used verbatim by several other sites hosted on this machine and registered to different owners.

Yaacov Apelbaum-AdSense Ready Sites

A quick estimate (based on a sampling of the domains hosted on one server) suggests that there are potentially tens of thousands of sites engaging in this type of activity, each making upwards of $150 a month. Clearly, this is a well-coordinated and thriving criminal enterprise.  It also turns out that there are hundreds of thriving franchises that, for as little as $79.95, will provide you with ten ready-made AdSense sites (you also get a starter kit, a centralized dashboard to manage your growing Internet empire, and even a spamming pipeline into relevant Twitter feeds).  A major sales pitch for these offers is the promise of “Passive-Residual” income, which is defined by one developer of such sites as:

“… a steady stream of income that you have to do nothing at all to maintain, once you have established it. Passive-Residual Income is the ONLY income that gives you the freedom to come and go as you please, on your own schedule, while working at home or in your spare time.”

If you think that this is business as usual on the lawless Internet, think again. This type of conduct severely impacts all of us, from content creators whose work is stolen and diluted, to service providers like Google who lose millions in revenue, all the way down to the average end user who gets spammed.

And yes, VisionPottery.com does have a copyright notice at the bottom of their web page, after all, they are only trying to protect their IP from other unscrupulous marketing entrepreneurs. Can you blame them?

 

© Copyright 2010 Yaacov Apelbaum All Rights Reserved.

Designed for Humans

Yaacov Apelbaum-Designed for Humans

In my previous life, I was a civil engineer. I worked for a large power marine construction company doing structural design and field engineering. The work assignments were pretty interesting. I got to blow up a bridge, salvage sunken vessels, and build a lot of interesting marine structures.  On one of my projects, I was given the responsibility to design a set of beds for pre-stressed concrete piles.  The design challenges in this project were significant. We had limited real estate, and the loads involved were higher than any previously attempted.

Yaacov Apelbaum-Prestressed Concrete Piles

Beds for pre-stressed concrete have massive anchors on each end. Between them, steel forms are placed and steel cables are strung.  The cables are first tensioned, then the concrete is poured into the form.  When the concrete hardens, you cap the cables and cut them.  The result is a pile or a girder that is significantly resistant to various loads.

Following best engineering practices, I completed a structural load analysis document, a set of production blueprints with full dimensional drawings, welding, coating, and assembly instructions, a bill of materials, and even a balsa scale model to help the manufacturing facility visualize my design.

Yaacov Apelbaum-Prestress Bed Scale Model

I was proud of my hard work and felt that it was a great achievement.  The day before the presentation, I went over all the calculations again and rehearsed my slides.  After one last sleepless night, I arrived at the conference room to find several structural engineers, the yard superintendent, a number of field engineers from several divisions, and the chief engineer from corporate, an elderly white-haired gentleman in his mid-sixties.  I remember feeling confident in my ability to sell my design to them.

The entire presentation went off without a glitch.  There were some stylistic comments, but the overall feedback was good.  After the presentation, the chief engineer stopped by, shook my hand, and said that he liked my design very much.  Then, with a straight face, he told me that he expected to see two additional alternative designs before we finalized our decision.

I was speechless.  “I’m not sure I understand, sir,” I said. “Didn’t you just say that you liked the design?” I pointed out that none of the participants had found any flaws in my proposal.  “Why,” I asked, “do you think we need to develop two additional designs?”

He paused for a moment, and then said, “You never know what the best idea is unless you compare several good ones side by side.” I nodded politely, but I was disappointed. I felt like this was probably some form of engineering hazing. Was it truly the case that it’s impossible to achieve reasonable quality on a first try? I didn’t really understand how valuable his advice was until years later.

Yaacov Apelbaum-Pre-Stressed Concrete Bed

Completed pre-stressed concrete beds

Fast forward several years. I switched from civil engineering to software development.  At the time, I was working as a lead front-end designer.  One of our key customers hired us to migrate a large VC++ client to a browser application.  In the mid-nineties, rich browser-based clients were relatively unheard of.  We were stumped. Problems like session security, persistence, and the lack of basic GUI controls seemed insurmountable.

During meetings, I would regularly sketch various GUI solutions.  But I often found that as soon as I came up with a solution, a new set of problems would be exposed and a redesign would be necessary. In retrospect, most of the ideas I came up with at the time were sub-par. But with each design, no matter how bad, another potential solution was discovered.  Each new design I sketched out was closer to the solution than its predecessor. Even the poor designs peeled away layers and exposed aspects of the problem that I hadn’t initially seen.

After dozens of attempts, I had an epiphany and came up with one design that could be implemented in several ways. Sketching and contemplating the various designs helped me tremendously, but when the time came to present my solution, I made a tactical mistake. I deliberately neglected to show all of the other working ideas for fear that they would think I was a mediocre designer; why else would I have needed to work through so many designs just to yield one single decent one?

I realized in retrospect that there would have been any number of acceptable designs, and by not presenting some of the other ideas I considered before arriving at the one I chose, I shortchanged myself. If anybody had suggested one of the other options I had discarded but not mentioned, I would have had to explain that I had already discarded that idea. But at that point, it would have jeopardized my credibility because it would have looked as if I was only trying to brush them off.

 Yaacov Apelbaum-Poor Design   Yaacov Apelbaum-Quality Design M1917

Multiple product designs

After participating in and leading many painful design meetings, I have come to the realization that the best way to sell the top design idea is to first share some of the alternative and inferior designs.

If you are responsible for usability or user interface design, you have to develop at least several alternative options for credibility purposes. By that I don’t mean that you should become a cynic and create duds just for the sake of generating volume. The alternate ideas have to represent meaningful and functional choices.

Once you have your alternates worked out, walk through the various options during your design meeting and call out what the pros and cons are for each and what the overall solution trade-offs would be. When discussing designs, place emphasis on both the positive and negative qualities of each alternative.  This will help your peers view you as an unbiased and professional presenter.  Also, you may find that when you present your top candidates, your team will come up with some hybrid solutions that otherwise would have been impossible to generate if you had only presented a single one.

Nowadays, I am often tasked with working on problems that are exceptionally difficult to overcome (with respect to both technology and schedule) and for which the typical, off-the-shelf solution is just not sufficient. But there is hope. Usually, after a few days of intense internal deliberations, complete with often heated exchanges of alternate designs, magic happens.

My secret sauce for breaking down the most difficult design problems consists of the following steps:

  • Get your entire team into a conference room, order plenty of pizza, and write down all possible solutions on the whiteboard. Make sure that everyone offers an opinion. Don’t make any go/no-go decisions during your first meeting; rather, leave the information on the board for several days (don’t forget to mark it as ‘do not delete’) and schedule a follow-up meeting. Tell everyone to document the pros and cons for each option and provide specific use cases.
  • Get your team into a conference room a second time, order plenty of pizza and write down the pros and cons list for each choice.  Boil down your choices to the top three candidates.
  • Work out the feasibility of each of the top three candidates and cast a vote for the best one.  This is the time to establish consensus and a team buy-in.

Way back when the chief engineer asked me to come up with two additional alternate designs, he was in fact telling me that no matter how talented a person is, there is tremendous value in variety.  He was also saying that in order to come up with a ‘good’ design, there must first be several inferior ones. If you are responsible for the design of any product features, you will want to encourage your team to flesh out the bad designs on the whiteboard or as a POC, not in your final product.  Unfortunately, the only way to achieve this is by expending resources and time exploring several possible solutions, up to and including some unattractive ones.

A common development folly (see It’s Good Enough for me) is the notion that there is in fact a ‘best’ solution or one right answer to a given problem.  Actually, the opposite is true. Considering time and resources, in most cases, the ‘best’ possible solution isn’t worth the effort and a ‘good’ solution would more than suffice.

If you are curious about which design I ended up using for the pre-stressed pile beds, it was the third one.  It turns out that, after I reconsidered the problem, I unexpectedly realized that due to the yard’s location at sea level, the water table was too high to accommodate my initial proposal. As a result, my updated design required various modifications in order to solve this problem.

Live, design and prosper.

 

© Copyright 2010 Yaacov Apelbaum All Rights Reserved.

Crafting Great Software Features Part-1

Yaacov Apelbaum-Useless Technology  
Driver’s Entertainment System and Password Protected Gear Shifter

Trying to do anything well is difficult. Developing useful features is no different. It takes more effort to create useful functionality than to produce eye candy.

Good feature design comes from a reliable and repeatable process (not dissimilar from CMM). Unfortunately, many organizations still have not figured this out. Instead of investing in usability engineering, they slug it out in isolation and, tragically, like Sisyphus, doom themselves to getting the wrong functionality and spending eternity (or until the project is canceled or runs out of money) patching their mistakes, version after version.

If your goal is to build a useful product, you should schedule your project so you can develop the needed functionality. Habitually excusing yourself by claiming that you “just don’t have the time” to develop quality is a cop-out. If you find that your team is spending a significant amount of time on feature seesaws (taking features out, putting them back in), it means that you’re not planning well and that the technical goals are not aligned with the product objectives.

Feature Framework
In a functional development organization, everyone should fully understand how their individual contributions will impact the end user. To help achieve this, there should be a feature framework in place.  The feature framework must cross organizational lines and bridge development, PM, product planning, sales, and marketing.  The chief purpose of the framework is to determine what challenges end users have and how a proposed feature, functionality, or technology will solve those problems. Without utilizing some degree of Feature Driven Development, you run the risk of creating features that may look great (like a password-protected gear shifter) but solve no real problems.

Management Knows Best, Well, not Always…
In order for you to master the product usability question, you have to conduct real-life tests with real would-be users. Talk to your customers about their pain points and ask them what they love (and hate) about your product.  The earlier you get the bad news, the better off you’ll be in the long run.

Keep in mind that most failures in software usability can be attributed to poor decisions at the executive level, which are perpetuated by a culture of silence. Developers and designers should be encouraged to think critically about their work and be provided with official channels for expressing their opinions (in a non-polemic manner).

Make sure they talk to the other members of your team to broaden their horizons.  Invite their participation in every feature conversation. When considering new or revolutionary UI changes, solicit input from your customers and your organization.  Without sufficient end-user, business justification, and sales input, it’s easy to develop functionality that is universally disliked (like the notorious Microsoft Clippy feature).

One of the most common development failures (after underestimating feature complexity and the cost and time to develop it) is the inability to correctly define the problem space, which leads to developing solutions that go looking for a problem.  If the project objective is vague and not granular enough, it’s impossible to know whether it has been met or not. Furthermore, even if the objective is well defined, you may still be working with the wrong assumptions about what the customer really needs.

A video screen built into a steering wheel that plays looped safety movies won’t help your user drive any more safely. These two types of problems—vague objectives and wrong assumptions—have nothing to do with your team’s technical ability. If you can’t sidestep these kinds of issues, even the best software engineers and designers are bound to fail. You may write great functionality and create a stunning UI, but if you can’t solve the right user problem, all of your hard work will be wasted.

Do you have a Problem with That?
The first step towards critical feature analysis is to take an objective view of the nature of the problem. As a developer, you are inherently biased towards the value of your features and suffer from a certain degree of myopia. You’re inside the tech bubble looking out, so you cannot possibly see your creations the way your users do.

Yaacov Apelbaum-Team Structure

To improve visibility, you need to supplement your view with external sources. These should include the UI team, technical developers, the product’s business champion, actual customers, sales, training, and marketing.  Get as many alternative views as you can. Again, make sure to talk directly with the users affected by the design. Don’t take a single camp’s word for what the problems are. Think of yourself as a newly elected congressman trying to keep your constituents happy. Don’t only consider the opinions of the power lobbyists.

Another challenge is that the way you approach your customers for feedback will impact the type of information you get from them. If you unintentionally bias your questions, you’ll get skewed data. The art of observing and understanding customer needs is an acquired skill, and it may take several trial runs before it’s honed.

When researching the problem, keep the following key questions in mind:

  1. Who are your users, and what are their skills and job functions?
  2. What is your users’ workflow, and how does your product enhance it?
  3. Do your users use other products to supplement your app?
  4. What business assumptions are you making about your users and their environment?
  5. What are your users’ specific deliverables (spreadsheets, reports, presentations, etc.)?
  6. How do your competitors solve the same problems you are working on?
  7. What is your users’ strategic information roadmap, and how does your app fit into it?

If you don’t have complete responses to these questions, you cannot start to design or develop any features or functionality. A solid understanding of your customers and the answers to these questions form the foundation of your application.

You Only Get One Chance, So Choose Wisely
As you are preparing to work on your next version, you will discover that there is an infinite number of issues that need solutions and a mile-long list of must-have features. But just having a raw list of bugs and features isn’t enough to build a product.  As it goes in life, some problems are not worth solving once you consider the poor return on the investment of time they require.

Often, a solution to one problem spawns multiple new problems (and bugs). You need to exercise good judgment, which means having the ability to distinguish what should be done from what can be done. After collecting the business requirements, the next step is the development roadmap. You have to synthesize the requirements and create a specific action plan for where to invest your time and resources.

With the data collected from customers and internal sources, distill the information into short one-sentence lists. These sentences should be written from the point of view of your end-user. For example, “Enable password field to accept 11 characters” is not a problem statement. But “Password field must support strong passwords” is. The difference is significant. You rarely want to define the solution and the problem in the same statement: If you do, you’ll miss the real problem.

In this example, there may be many other ways to solve the problem of password strength, including hard-coding the logic to accept 11 characters. But if you are too narrowly focused, you’ll never see the alternatives (make the password length database-driven, integrate it with LDAP, add biometrics, etc.). Good feature development is all about understanding your alternatives.
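As a purely illustrative sketch of the “database-driven” alternative (the table name, columns, and policy values below are hypothetical), the validation logic can stay generic while the policy itself lives in a configuration store that can change without a code release:

    import sqlite3

    def load_password_policy(conn):
        # Hypothetical policy table: one row of tunable limits instead of
        # hard-coded constants scattered through the code.
        row = conn.execute(
            "SELECT min_length, require_digit FROM password_policy LIMIT 1"
        ).fetchone()
        return {"min_length": row[0], "require_digit": bool(row[1])}

    def is_strong(password, policy):
        if len(password) < policy["min_length"]:
            return False
        if policy["require_digit"] and not any(ch.isdigit() for ch in password):
            return False
        return True

    # Demo with an in-memory database standing in for the real configuration store.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE password_policy (min_length INTEGER, require_digit INTEGER)")
    conn.execute("INSERT INTO password_policy VALUES (11, 1)")
    policy = load_password_policy(conn)
    print(is_strong("correcthorse1", policy))  # True
    print(is_strong("short1", policy))         # False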

Yaacov Apelbaum-problem to a solution

A solution looking for a problem

For each problem statement, provide supporting information. Include details about which users have the problem, how it was discovered, and what the potential workarounds are. Determine whether the problem only occurs in certain configurations. Following Feynman’s rule of scientific integrity, provide as much detail as you can to allow others to confirm or challenge your assumptions.

If you’re the owner of the usability study or market research data, make it available to everyone.  Don’t ask anyone to “trust you”. The more open you are about your feature sources, the less likely it is that people will suspect you.

Hold your Fire
Only start coding when you see “the whites of their eyes”.  When the goals are set and the problems to be solved have been well defined and understood, you can begin to build the features. Instead of adopting a top-down approach (where the development team is told what to do), engineers and UI designers should have free rein to generate the ideas that will solve the problems. Time should be allocated to investigate different alternatives that might provide the necessary functionality, and to run usability studies on prototypes to see if they actually improve the end-user experience. Only once you evaluate all your potential solutions and pick the best one can you engage in full-speed development. The rest of your solutions do not have to be discarded; you can shelve them for future releases.

As long as you are working within a feature framework, you are guaranteed to be marching in the right product direction. There should be a lot of innovative and creative juices flowing in your team, and even if you can’t completely solve all of your feature functionality challenges, a partial solution to the most important problem will still be superior to a perfect solution to the wrong problem.

 

© Copyright 2010 Yaacov Apelbaum All Rights Reserved.