Pack of Asses in Shangri-La

Yaacov Apelbaum - Asses in Shangri-La

A donkey pack in Shangri-La: The first dumb ass on the left is the laziest; he slows down the pack because he is always looking for something to eat. The group of jackasses in the middle just stands there contemplating the concept that death is a cosmic opportunity. The big ass on the right is their enlightened guru. He imparts to the pack the consciousness that forms the foundation of their spirituality and growth.

In September 2011, while on a photography assignment in Wood Buffalo National Park in Alberta, Canada, for the nature show Frozen Planet, Chadden Hunter and his team captured imagery of a wolf pack hunting bison. Hunter provided the following description of the image:

Yaacov Apelbaum - Pack of Wolves 1

Chadden Hunter’s Original Wolf Pack Photograph

“A massive pack of 25 timberwolves hunting bison on the Arctic circle in northern Canada. In mid-winter in Wood Buffalo National Park temperatures hover around -40°C. The wolf pack, led by the alpha female, travel single-file through the deep snow to save energy. The size of the pack is a sign of how rich their prey base is during winter when the bison are more restricted by poor feeding and deep snow. The wolf packs in this National Park are the only wolves in the world that specialize in hunting bison ten times their size. They have grown to be the largest and most powerful wolves on earth.”

Now fast-forward the clock four years to December 17, 2015, when a user named Cesare Brai published a post on an Italian-language FB page. He used Hunter’s original image but provided this alternate verbiage:

"Un pacco di lupi: i primi 3 sono i vecchi o gli ammalati, danno il passo all’intero pacco. Se fosse l’altro, essi sarebbero stati lasciati indietro, perdendo il contatto con il pacco. Essere sacrificati, poi vengono 5 forti, la prima linea, al centro sono i restanti membri del paccho, poi i 5 più forti seguendo: l’ultimo è solo, l’alfa, controlla tutto dal retro, in quella posizione può vedere tutto, decide la direzione, vede tutto il pacco, il paccho si muove secondo i tempi più anziani e si aiuta reciprocamente, si guardano a vicenda ".

Cesare Brai’s post is interesting, for the following reasons:

– From the post’s grammar, it is clear that he is not a native Italian speaker
– Shortly after publication, the post was taken down and Brai disabled his FB account
– Cesare Brai has no internet presence beyond the incorrect photo credit attribution

Three days later, on December 20, 2015, the Italian Facebook posting was translated into English and posted again on FB by Barbara Hermel Bach. The translation appeared as follows:

"A wolf pack: the first 3 are the old or sick, they give the pace to the entire pack. If it was the other way round, they would be left behind, losing contact with the pack. In case of an ambush they would be sacrificed. Then come 5 strong ones, the front line. In the center are the rest of the pack members, then the 5 strongest following. Last is alone, the alpha. He controls everything from the rear. In that position he can see everything, decide the direction. He sees all of the pack. The pack moves according to the elders pace and help each other, watch each other."
Cesare Brai’s photo. — with Deb Barnes.

Ignoring for a moment the actual content of Bach’s posting, it is interesting to note that her verbiage is a reverse English translation of Cesare Brai’s Italian text. This means the text was most likely first written in English, then translated and posted in Italian under Brai’s name, and finally reposted in English under her name.

In her post, she attributed the photo credit to the mysterious Cesare Brai. It is a noteworthy mistake because her collaborator on this post was one Deborah Barnes, a professional animal photographer who, judging from the multiple notices on her website, is very sensitive to issues of copyright infringement.

Yaacov Apelbaum - Deborah Barnes

Deborah Barnes’s About Webpage

Barbara Hermel Bach - Pack of Wolves

Barbara Hermel Bach Facebook Post

In terms of memetic engineering, the post was a hit! Within a few weeks, it went viral and has since garnered close to 486K views and over 237K shares. As you can see from just a few of the comments below, Bach’s new age wolf pack narrative clearly struck a chord with her audience:

Yaacov Apelbaum - Wolf Pack Comments

Content Adaptation by Management Consultants and Corporate Trainers 
By 2016, the wolf pack leadership concept in Bach’s FB post had taken the world of recruiters, management coaches, and efficiency consultants by storm. Many of them embraced the idea and have since been using the bogus narrative in their online publications.

Of special interest is the marking algorithm used by each of the republishers to re-brand the image and idea as their own. As you can see from the few variations below, each one alters the original image with a simple variation in color, geometric shape, and/or arrow orientation.

Yaacov Apelbaum - Pack of Wolves 2

Yaacov Apelbaum - Pack of Wolves 3

Yaacov Apelbaum - Pack of Wolves 4

Yaacov Apelbaum - Pack of Wolves 5

Yaacov Apelbaum - Pack of Wolves 6

Copycat variations on Bach’s Posting

So why all of the subterfuge, stratagems, and ruses? Why go through all of the trouble to hide Hunter’s name as the original photographer? Why alter the real location of the shot and go through all of the trouble of creating a sock puppet called Cesare Brai? And even now, why not just come out and either remove the original posting (which is a blatant copyright violation) or state for the record that the narrative is false? After all, even Hunter, the photographer who took the original shot, publicly posted on his Twitter account that he was being ripped off by Bach:

Yaacov Apelbaum - Pack of Wolves 7

Hunter’s Image Piracy Tweet

It’s hard to answer these questions with certainty. We know from the posting that both Barnes and Bach contributed to it. Writing style analysis (I used JStylo-Anonymouth) suggests that Bach wrote the verbiage. If that was the case, then what was Barnes’ share? It is possible that, as a professional animal photographer, she stumbled on Hunter’s original image and felt that she could repurpose it by attributing it to the fictitious Cesare Brai. As the “animal expert”, she could have also provided the “new age” insight into the wolf pack behavior.

By 2015, four years had passed since this image was originally seen on Frozen Planet, and the chance that anyone would remember it was slim. So the rationale must have been that changing the name of the photographer and withholding the location of the shot would add two additional layers of obscurity to the image.

What I find most interesting about this and her other posts is that they required a significant amount of planning and execution, and that her network produces large amounts of this type of material on a regular basis.

Considering that Bach is a liberal activist with an aggressive political agenda and a member of a large community of like-minded individuals who distribute such high-grade social propaganda, it’s plausible that these publications are part of some kind of organized political media production line.

Yaacov Apelbaum - Resistance   Yaacov Apelbaum - Resistance

Yaacov Apelbaum - Resistance   Yaacov Apelbaum - Resistance

Samples of Bach’s Social Action Content

Out of courtesy and to give Bach and Barnes the benefit of the doubt, I reached out to both of them to inquire about their sources of the image and verbiage. Alas, I have not received a response.

As far as the spiritual and uplifting content of Bach’s posting is concerned, there’s good news. Now you too can generate similar materials, and no, you don’t have to spend seven lost years in Tibet on a soul-searching journey. You can do so effortlessly with a few mouse clicks!

Just do as I did with the “Pack of Asses in Shangri-La”: pick a random animal pack image, go to the Inspirational BS Generator or the Corporate BS Generator, and in no time you will be the leading ass who manages the pack from behind. Or, as the BS generator would put it:

"You would be seamlessly innovating new backend leadership paradigms".

© Copyright 2017 Yaacov Apelbaum, All Rights Reserved.


Coincidence or Not?


You may have seen this motivational masterpiece. It’s a favorite among performance consultants. 

It goes as follows:

If the letters A, B, C, … Z are assigned the values 1, 2, 3, … 26, then:

K N O W L E D G E
11+14+15+23+12+5+4+7+5 = 96%

H A R D W O R K
8+1+18+4+23+15+18+11 = 98%

Both are important, but fall just short of 100%.

A T T I T U D E
1+20+20+9+20+21+4+5 = 100%

So the moral of the story is that if you have the right attitude, you will achieve 100 percent of your potential. 

It sure looks great on paper. To test the mystical value of this proposition, I’ve written a short script to first generate words between 2 and 12 characters long whose letter values add up to 100, and then check which of them appear in a dictionary.
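A minimal sketch of such a script follows; the short hard-coded word list stands in for a real dictionary file, which is my assumption about how the search would be fed:

```python
def letter_sum(word):
    """Score a word with A=1, B=2, ... Z=26 (case-insensitive)."""
    return sum(ord(c) - ord('a') + 1 for c in word.lower() if c.isalpha())

def words_scoring_100(words):
    """Keep the 2-12 character words whose letter values total exactly 100."""
    return [w for w in words if 2 <= len(w) <= 12 and letter_sum(w) == 100]

# Stand-in for a dictionary file (hypothetical sample, not the full word list)
sample = ["knowledge", "hardwork", "attitude", "boycott", "asbestos"]
print(words_scoring_100(sample))  # attitude, boycott, and asbestos all score 100
```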

As might be expected, the script generated hundreds of valid words (see the short sample below, just for the letter A). It turns out that many of them are not very motivational.

A T T I T U D E
1+20+20+9+20+21+4+5 = 100%

B O Y C O T T
2+15+25+3+15+20+20 = 100%

The problem with all of these leadership gimmicks is that they fail to grasp the fundamentals of human performance, chiefly that nothing in nature functions at 100% efficiency. In reality, anything that operates in the 70th-percentile range is outstanding.

Anyone with doubts should consult Frederick Brooks’ The Mythical Man-Month.


Word          Letter Values

ABROGATIVE    1 + 2 + 18 + 15 + 7 + 1 + 20 + 9 + 22 + 5 = 100
ACROMEGALY    1 + 3 + 18 + 15 + 13 + 5 + 7 + 1 + 12 + 25 = 100
AFFECTATION   1 + 6 + 6 + 5 + 3 + 20 + 1 + 20 + 9 + 15 + 14 = 100
ALIENATION    1 + 12 + 9 + 14 + 5 + 1 + 20 + 9 + 15 + 14 = 100
ANCHORITIC    1 + 14 + 3 + 8 + 15 + 18 + 9 + 20 + 9 + 3 = 100
ANGLOPHOBIA   1 + 14 + 7 + 12 + 15 + 16 + 8 + 15 + 2 + 9 + 1 = 100
ANORCHISM     1 + 14 + 15 + 18 + 3 + 8 + 9 + 19 + 13 = 100
ARYANISM      1 + 18 + 25 + 1 + 14 + 9 + 19 + 13 = 100
ASBESTOS      1 + 19 + 2 + 5 + 19 + 20 + 15 + 19 = 100



© Copyright 2017 Yaacov Apelbaum, All Rights Reserved.

Only a Math Genius can Solve this Puzzle–Not Really!


Yaacov Apelbaum Sumerian mathematic tablet


One of the most popular math equation puzzles on social media is interesting because it doesn’t have one correct answer, and it illustrates the nature of solution divergence.

Here is an example. The following two problems can be solved correctly regardless of whether we use the sum of the digits in the product or the product of the sums of digits:

11×11 = 121 → 1+2+1 = 4, and likewise (1+1)×(1+1) = 4
22×22 = 484 → 4+8+4 = 16, and likewise (2+2)×(2+2) = 16


But when it comes to the next item in the set, 33×33 = ?, the two solutions diverge and yield two different results (see the result table below for methods 1 and 2).

For method 1 (sum of the digits in the product) it is: 33×33=18

33×33=1089 or 1+0+8+9= 18

For method 2 (product of the sum of digits) it is: 33×33=36

(3+3)×(3+3) = 6×6 = 36


Here is a graphic solution for method 2

Yaacov Apelbaum If X and Y than Z

Here are the solutions for the first 40 multiples of 11 using each method.

Method 1 (sum of the digits in the product) vs. Method 2 (product of the sums of digits):

N      N×N       Method 1   Method 2
11     121       4          4
22     484       16         16
33     1089      18         36
44     1936      19         64
55     3025      10         100
66     4356      18         144
77     5929      25         196
88     7744      22         256
99     9801      18         324
110    12100     4          400
121    14641     16         484
132    17424     18         576
143    20449     19         676
154    23716     19         784
165    27225     18         900
176    30976     25         1024
187    34969     31         1156
198    39204     18         1296
209    43681     22         1444
220    48400     16         1600
231    53361     18         1764
242    58564     28         1936
253    64009     19         2116
264    69696     36         2304
275    75625     25         2500
286    81796     31         2704
297    88209     27         2916
308    94864     31         3136
319    101761    16         3364
330    108900    18         3600
341    116281    19         3844
352    123904    19         4096
363    131769    27         4356
374    139876    34         4624
385    148225    22         4900
396    156816    27         5184
407    165649    31         5476
418    174724    25         5776
429    184041    18         6084
440    193600    19         6400



It is interesting to note the series growth patterns for each method. Whereas in method 1 the values tend to cluster around a range of several values (see the pattern for 30K solutions), in method 2 the growth is polynomial.
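The two methods are easy to replicate. Here is a rough sketch, applied to the two-digit multiples of 11 (the function names are mine, not the puzzle’s):

```python
def digit_sum(n):
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

def method1(n):
    # Method 1: sum of the digits in the product n x n
    return digit_sum(n * n)

def method2(n):
    # Method 2: product of the sums of the digits of the two factors
    return digit_sum(n) * digit_sum(n)

for n in range(11, 100, 11):  # two-digit multiples of 11
    print(n, method1(n), method2(n))
# 11 and 22 agree under both methods; 33 is the first to diverge (18 vs. 36)
```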


© Copyright 2017 Yaacov Apelbaum, All Rights Reserved.

Big O Notation

Yaacov Apelbaum-big-o-and-efficiency

Recently, I was chatting with a friend of mine about pre-acquisition due diligence.  Charlie O’Rourke is one of the most seasoned technical executives I know. He’s been doing hardcore technology for over 30 years and is one of the pivotal brains behind FDC’s multi-billion dollar payment processing platforms.  The conversation revolved around a method he uses for identifying processing bottlenecks.
His thesis was that in a world where you need to spend as little as you can on an acquisition and still turn a profit quickly, problems of poor algorithmic implementation are “a good thing to have”, because they are relatively easy to identify and fix. This is true, assuming that you have his grasp of large-volume transactional systems and you are handy with complex algorithms.

In today’s environment of rapid system assembly via the mashing up of frameworks and off-the-shelf functionality like CRM or ERP, mastery of data structures among younger developers is almost unheard of.

It’s true, most developers will probably never write an algorithm from scratch. But sooner or later, every coder will have to either implement or maintain a routine that has some algorithmic functionality. Unfortunately, when it comes to efficiency, you can’t afford to make uninformed decisions, as even the smallest error in choosing an algorithm can send your application screaming in agony to Valhalla.

So if you have been suffering from recursive algorithmic nightmares, or have never fully understood the concept of algorithmic efficiency, (or plan to interview for a position on my team), here is a short and concise primer on the subject.

First let’s start with definitions.

Best or Bust:
An important principle to remember when selecting algorithms is that there is no such thing as the “best algorithm” for all problems. Efficiency will vary with data set size and the availability of computational resources (memory and processor). What is trivial in terms of processing power for the NSA could be prohibitive for the average Joe.

Algorithmic efficiency is the measure of how well a routine can perform a computational task. One analogy for algorithmic efficiency and its dependence on hardware (memory capacity and processor speed) is the task of moving a ton of bricks from one location to another a mile away. If you use a Lamborghini for this job (small storage but fast acceleration), you will be able to move a small number of bricks very quickly, but the downside is that you will have to repeat the trip multiple times. On the other hand, if you use a flatbed truck (large storage but slow acceleration), you will be able to complete the entire project in a single run, albeit at a slower pace.

The expression for algorithmic efficiency is commonly referred to as “Big O” notation. This is a mathematical representation of how an algorithm’s resource consumption grows as the size of its input grows. When plotted as functions, algorithms will remain flat, grow steadily, or follow varying curves.

The Pessimistic Nature of Algorithms:
In the world of algorithm analysis, we always assume the worst case scenario.  For example, if you have an unsorted list of unique numbers and it’s going to take your routine an hour to go through it, then it is possible in the best case scenario that you could find your value on the first try (taking only a minute). But following the worst case scenario theory, your number could end up being the last one in the list (taking you the full 60 minutes to find it). When we look at efficiency, it’s necessary to assume the worst case scenario.

 Yaacov Apelbaum-big-o Plot
Image 1: Sample Performance Plots of Various Algorithms


O(1)
Performance is constant for time (processor utilization) or space (memory utilization) regardless of the size of the data set. When viewed on a graph, these functions show a no-growth curve and remain flat.

An O(1) algorithm’s performance is also independent of the size of the data set on which it operates.

An example of this algorithm is testing the value of a variable against a predefined hash table. The single lookup involved in this operation eliminates any growth curve.
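A sketch of the constant-time hash lookup described above (the table contents here are illustrative, not from the original):

```python
# A small hash table; lookup cost does not depend on how many entries it holds.
status_codes = {200: "OK", 404: "Not Found", 500: "Server Error"}

def describe(code):
    """Single hash-table lookup -- O(1) on average, independent of table size."""
    return status_codes.get(code, "Unknown")

print(describe(404))  # Not Found
```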

O(N)
Performance will grow linearly and in direct proportion to the size of the input data set. The algorithm’s performance is directly related to the size of the data set processed.

O(2N) or O(10 + 5N) denotes that some specific business logic has been blended with the implementation (which should be avoided if possible).

O(N+M) is another way of saying that two data sets are involved, and that their combined size determines performance.

An example of this algorithm is finding an item in an unsorted list with a Linear Search, which goes down the list one item at a time without jumping. The time taken to search the list grows at the same rate as the list does.
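A minimal Linear Search along the lines described above:

```python
def linear_search(items, target):
    """O(N) linear search: examine one item at a time, without jumping.
    In the worst case (pessimistic analysis) every element is touched."""
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1  # target not present

print(linear_search([7, 3, 9, 1], 9))  # found at index 2
```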

O(N^2)
Performance will be directly proportional to the square of the size of the input data set. This happens when the algorithm processes each element of a set and that processing requires another pass through the set (hence the square). Additional nesting of inner loops will result in O(N^3), O(N^4), and so on up to O(N^n).

Examples of this type of algorithm are Bubble Sort, Shell Sort, Selection Sort, Insertion Sort, and Quicksort in its worst case.
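A Bubble Sort sketch that makes the nested-pass structure, and hence the squared growth, visible:

```python
def bubble_sort(items):
    """O(N^2) Bubble Sort: the outer loop visits each element, and each visit
    triggers an inner re-scan of the list -- hence the squared growth."""
    data = list(items)  # work on a copy; leave the caller's list untouched
    n = len(data)
    for i in range(n):
        for j in range(n - 1 - i):  # each pass bubbles the largest item up
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return data

print(bubble_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```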

O(2^N)
Processing growth (data set size and time) will double with each additional element of the input data set, so the execution time of an O(2^N) algorithm grows exponentially.

The 2 indicates that time or memory doubles for each new element in the data set. In reality, these types of algorithms do not scale well unless you have a lot of fancy hardware.

O(log N) and O(N log N)
Processing is iterative, and the growth curve peaks at the beginning of the execution and then slowly tapers off as the size of the data set increases. For example, if a data set containing 10 items takes one second to process, a data set containing 100 items takes two seconds, a data set containing 1,000 items takes three seconds, and so on. Doubling the size of the input data set has little effect on growth because after each iteration the data set is halved. This makes O(log N) algorithms very efficient when dealing with large data sets.

Generally, log N implies log2 N, which refers to the number of times you can partition a data set in half, then partition the halves, and so on. For example, for a data set with 1024 elements, you would perform 10 lookups (log2(1024) = 10) before either finding your value or running out of data.

Lookup #   Initial Dataset   New Dataset
1          1024              512
2          512               256
3          256               128
4          128               64
5          64                32
6          32                16
7          16                8
8          8                 4
9          4                 2
10         2                 1

A good illustration of this principle can be found in the Binary Search. It works by selecting the middle element of the data set and comparing it against the desired value to see if it matches. If the target value is higher than the value of the selected element, it will select the upper half of the data set and perform the comparison again. If the target value is lower than the value of the selected element, it will perform the operation against the lower half of the data set. The algorithm will continue to halve the data set with each search iteration until it finds the desired value or until it exhausts the data set.
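The halving behavior described above can be sketched as follows:

```python
def binary_search(data, target):
    """O(log N) Binary Search over a sorted list: compare the target against
    the middle element and discard half of the remaining data each iteration."""
    low, high = 0, len(data) - 1
    while low <= high:
        mid = (low + high) // 2
        if data[mid] == target:
            return mid
        elif data[mid] < target:
            low = mid + 1   # keep the upper half
        else:
            high = mid - 1  # keep the lower half
    return -1  # data set exhausted without a match

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # found at index 3
```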

The important thing to note about log2 N algorithms is that they grow slowly. Doubling N has a minor effect on performance, and the logarithmic curves flatten out smoothly.

Examples of these types of algorithms are Binary Search (O(log N)) and Heap Sort, Merge Sort, and Quicksort’s average case (O(N log N)).

Scalability and Efficiency
An O(1) algorithm scales better than an O(log N) algorithm,
which scales better than an O(N),
which scales better than an O(N log N),
which scales better than an O(N^2),
which scales better than an O(2^N).

Scalability does not equal efficiency. A well-coded O(N^2) algorithm can outperform a poorly coded O(N log N) algorithm, but only for certain data set sizes and processing times. At some point, the performance curves of the two algorithms will cross and their relative efficiency will reverse.

What to Watch for when Choosing an Algorithm
The most common mistake when choosing an algorithm is the belief that an algorithm that was used successfully on a small data set will scale effectively to large data sets (factor 10x, 100x, etc.).

For most situations, an O(N^2) algorithm like Bubble Sort will work well. If you switch to a more complex O(N log N) algorithm like Quicksort, you are likely to spend a long time refactoring your code and will only realize marginal performance gains.

More Resources
For a great illustration of various sorting algorithms in live action, check out David R. Martin’s animated demo. For more formal coverage of algorithms, check out Donald Knuth’s epic publication on the subject, The Art of Computer Programming, Volumes 1-4.

If you are looking for some entertainment while learning the subject, check out AlgoRythmics’ series on sorting through dancing.



© Copyright 2011 Yaacov Apelbaum All Rights Reserved.