Archive for the ‘Algorithm’ Category

Coincidence or Not?

You may have seen this motivational masterpiece. It’s a favorite among performance consultants. 

It goes as follows:

IF

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26

THEN:

K N O W L E D G E
11 14 15 23 12 5 4 7 5 96%

AND:

H A R D W O R K
8 1 18 4 23 15 18 11 98%

Both are important, but fall just short of 100%

BUT…

A T T I T U D E
1 20 20 9 20 21 4 5 100%

So the moral of the story is that if you have the right attitude, you will achieve 100 percent of your potential. 

It sure looks great on paper. To test the mystical value of this proposition, I’ve written a short script that first generates letter combinations between 2 and 12 characters long whose values add up to 100, and then checks which of them can be found in a dictionary.
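A minimal sketch of such a script (not the original) looks something like the following. The word-list path is an assumption for Unix-like systems; substitute your own dictionary file.

```python
# Score each dictionary word with A=1 ... Z=26 and keep the words
# between 2 and 12 letters long whose values total exactly 100.
WORDLIST = "/usr/share/dict/words"  # assumed path; substitute your own

def letter_sum(word: str) -> int:
    """Sum of letter values, a/A=1 through z/Z=26."""
    return sum(ord(c) - ord("a") + 1 for c in word.lower())

with open(WORDLIST) as f:
    words = [w.strip() for w in f if w.strip().isalpha()]

hits = [w for w in words if 2 <= len(w) <= 12 and letter_sum(w) == 100]
print(len(hits), hits[:10])  # e.g. letter_sum("attitude") == 100
```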

As might be expected, the script generated hundreds of valid words (see the short sample below just for the letter A). It turns out that many of them are not very motivational.

A N E U R I S M
1 14 5 21 18 9 19 13 100%
B O Y C O T T
2 15 25 3 15 20 20 100%

The problem with all of these leadership gimmicks is that they fail to understand the fundamentals of human performance, chiefly that nothing in nature functions at 100% efficiency. In actuality, anything that operates in the 70th percentile range is outstanding.

Anyone who doubts this should consult Frederick Brooks’s The Mythical Man-Month.

Word          Letter Values                                     Sum
Abrogative    1 + 2 + 18 + 15 + 7 + 1 + 20 + 9 + 22 + 5         100
Acromegaly    1 + 3 + 18 + 15 + 13 + 5 + 7 + 1 + 12 + 25        100
Affectation   1 + 6 + 6 + 5 + 3 + 20 + 1 + 20 + 9 + 15 + 14     100
Alienation    1 + 12 + 9 + 5 + 14 + 1 + 20 + 9 + 15 + 14        100
Anchoritic    1 + 14 + 3 + 8 + 15 + 18 + 9 + 20 + 9 + 3         100
Anglophobia   1 + 14 + 7 + 12 + 15 + 16 + 8 + 15 + 2 + 9 + 1    100
Anorchism     1 + 14 + 15 + 18 + 3 + 8 + 9 + 19 + 13            100
Aryanism      1 + 18 + 25 + 1 + 14 + 9 + 19 + 13                100
Asbestos      1 + 19 + 2 + 5 + 19 + 20 + 15 + 19                100

 

© Copyright 2017 Yaacov Apelbaum, All Rights Reserved.

Only a Math Genius can Solve this Puzzle–Not Really!

 

[Image: Sumerian mathematical tablet]

 

One of the most popular math-equation puzzles on social media is interesting because it doesn’t have one correct answer; instead, it illustrates how solutions can diverge.

Here is an example.  The following two problems can be solved correctly regardless of whether we use the sum-of-the-digits-in-the-product method or the product-of-the-sums-of-the-digits method:

11×11=4
22×22=16

But when it comes to the next one in the series, 33×33 = ?, the solutions diverge and yield two different results (see the results table below for methods 1 and 2).

For method 1 (sum of the digits in the product) it is: 33×33=18

33×33=1089 or 1+0+8+9= 18

For method 2 (product of the sum of digits) it is: 33×33=36

(3+3) × (3+3) = 6 × 6 = 36
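For readers who want to verify the arithmetic, here is a minimal sketch of both methods (the function names are my own):

```python
def digit_sum(n: int) -> int:
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

def method1(x: int, y: int) -> int:
    """Method 1: sum of the digits in the product."""
    return digit_sum(x * y)

def method2(x: int, y: int) -> int:
    """Method 2: product of the sums of the digits."""
    return digit_sum(x) * digit_sum(y)

for n in (11, 22, 33):
    print(f"{n} x {n}: method 1 = {method1(n, n)}, method 2 = {method2(n, n)}")
# 11 x 11: method 1 = 4,  method 2 = 4
# 22 x 22: method 1 = 16, method 2 = 16
# 33 x 33: method 1 = 18, method 2 = 36  <-- the methods diverge here
```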

 

Here is a graphic solution for method 2

[Image: graphic solution for method 2]

Here are the solutions for the first 40 sets for each method.

Method 1 = sum of the digits in the product; Method 2 = product of the sums of the digits.

X      Y      X × Y      Method 1   Method 2
11     11     121        4          4
22     22     484        16         16
33     33     1089       18         36
44     44     1936       19         64
55     55     3025       10         100
66     66     4356       18         144
77     77     5929       25         196
88     88     7744       22         256
99     99     9801       18         324
110    110    12100      4          400
121    121    14641      16         484
132    132    17424      18         576
143    143    20449      19         676
154    154    23716      19         784
165    165    27225      18         900
176    176    30976      25         1024
187    187    34969      31         1156
198    198    39204      18         1296
209    209    43681      22         1444
220    220    48400      16         1600
231    231    53361      18         1764
242    242    58564      28         1936
253    253    64009      19         2116
264    264    69696      36         2304
275    275    75625      25         2500
286    286    81796      31         2704
297    297    88209      27         2916
308    308    94864      31         3136
319    319    101761     16         3364
330    330    108900     18         3600
341    341    116281     19         3844
352    352    123904     19         4096
363    363    131769     27         4356
374    374    139876     34         4624
385    385    148225     22         4900
396    396    156816     27         5184
407    407    165649     31         5476
418    418    174724     25         5776
429    429    184041     18         6084
440    440    193600     19         6400

[Images: growth-pattern plots for methods 1 and 2]

It is interesting to note the growth pattern of each series.  In method 1, the values tend to cluster within a narrow band of recurring values (see the pattern for 30K solutions), while in method 2 the growth is polynomial.

 

© Copyright 2017 Yaacov Apelbaum, All Rights Reserved.

How many four-sided figures appear in the diagram?

There are a number of these geometric combinatorics problems around.  Here is a complete graphic solution to one of the more common ones.

Question: How many four-sided figures appear in the diagram below?

  • 10
  • 16
  • 22
  • 25
  • 28

Answer: 25

[Image: complete graphic solution identifying all 25 four-sided figures]

 

© Copyright 2017 Yaacov Apelbaum, All Rights Reserved.


Big O Notation

[Image: Big O notation and efficiency]

Recently, I was chatting with a friend of mine about pre-acquisition due diligence.  Charlie O’Rourke is one of the most seasoned technical executives I know. He’s been doing hardcore technology for over 30 years and is one of the pivotal brains behind FDC’s multi-billion dollar payment processing platforms.  The conversation revolved around a method he uses for identifying processing bottlenecks.
 
His thesis statement was that in a world where you need to spend as little as you can on an acquisition and still turn a profit quickly, problems of poor algorithmic implementation are “a good thing to have” because they are relatively easy to identify and fix.  This is true, assuming that you have his grasp of large-volume transactional systems and you are handy with complex algorithms.

In today’s environment of rapid system assembly via the mashing-up of frameworks and off-the-shelf functionality like CRM or ERP, mastery of data structures among younger developers is almost unheard of.

It’s true that most developers will probably never write an algorithm from scratch. But sooner or later, every coder will have to either implement or maintain a routine that has some algorithmic functionality. Unfortunately, when it comes to efficiency, you can’t afford to make uninformed decisions; even the smallest error in choosing an algorithm can send your application screaming in agony to Valhalla.

So if you have been suffering from recursive algorithmic nightmares, or have never fully understood the concept of algorithmic efficiency, (or plan to interview for a position on my team), here is a short and concise primer on the subject.

First let’s start with definitions.

Best or Bust:
An important principle to remember when selecting algorithms is that there is no such thing as the “best algorithm” for all problems. Efficiency will vary with data set size and the availability of computational resources (memory and processor).  What is trivial in terms of processing power for the NSA could be prohibitive for the average Joe.

Efficiency:
Algorithmic efficiency is the measure of how well a routine can perform a computational task. One analogy for algorithmic efficiency and its dependence on hardware (memory capacity and processor speed) is the task of moving a ton of bricks from one location to another a mile away.  If you use a Lamborghini for this job (small storage but fast acceleration), you will be able to move a small number of bricks very quickly, but the downside is that you will have to repeat the trip multiple times.  On the other hand, if you use a flatbed truck (large storage but slow acceleration), you will be able to complete the entire project in a single run, albeit at a slower pace.

Notation:
The expression for algorithmic efficiency is commonly referred to as “Big O” notation.  This is a mathematical representation of how an algorithm’s running time or memory use grows as the size of its input grows. When plotted as a function of input size, some algorithms remain flat, some grow steadily, and others follow varying curves.

The Pessimistic Nature of Algorithms:
In the world of algorithm analysis, we always assume the worst case scenario.  For example, if you have an unsorted list of unique numbers and it’s going to take your routine an hour to go through it, then it is possible in the best case scenario that you could find your value on the first try (taking only a minute). But following the worst case scenario theory, your number could end up being the last one in the list (taking you the full 60 minutes to find it). When we look at efficiency, it’s necessary to assume the worst case scenario.

[Image 1: Sample Performance Plots of Various Algorithms]

O(1)

Performance is constant in time (processor utilization) or space (memory utilization), regardless of the size of the data set. When viewed on a graph, these functions show no growth curve and remain flat.

An O(1) algorithm’s performance is independent of the size of the data set on which it operates.

An example of this is testing the value of a variable against a predefined hash table.  The single lookup involved in this operation eliminates any growth curve.
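A minimal sketch of the idea, using a hash table lookup (the table contents here are just illustrative):

```python
# A hash table answers lookups in constant time on average,
# no matter how many entries it holds.
status_codes = {200: "OK", 301: "Moved Permanently", 404: "Not Found"}

def describe(code: int) -> str:
    # Single hash lookup; cost does not grow with the size of the table.
    return status_codes.get(code, "Unknown")

print(describe(404))  # "Not Found"
```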

O(N)
Performance will grow linearly and in direct proportion to the size of the input data set.

Forms like O(2N) or O(10 + 5N) suggest that some specific business logic has been blended into the implementation (which should be avoided if possible); since constant factors are dropped in Big O analysis, both still reduce to O(N).

O(N+M) is another way of saying that two data sets are involved, and that their combined size determines performance.

An example of this type of algorithm is finding an item in an unsorted list with a Linear Search, which goes down the list one item at a time without jumping.  The time taken to search the list grows at the same rate as the list does.
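A minimal sketch of a Linear Search along these lines:

```python
# Walk the list one item at a time until the target is found
# or the list is exhausted: up to N comparisons for N items.
def linear_search(items, target):
    for i, value in enumerate(items):
        if value == target:
            return i        # index of the first match
    return -1               # worst case: scanned all N items

print(linear_search([42, 7, 19, 3], 19))  # 2
```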

O(N²)
Performance will be directly proportional to the square of the size of the input data set.  This happens when the algorithm processes each element of a set and that processing requires another pass through the set (hence the squared value). Nesting more inner loops yields the forms O(N³), O(N⁴), and so on.

Examples of this type of algorithm are Bubble Sort, Shell Sort, Selection Sort, Insertion Sort, and Quicksort in its worst case.
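A minimal sketch of Bubble Sort, showing the nested passes that produce the N × N growth:

```python
# The outer loop repeats for each element, and every pass re-walks
# the list, giving roughly N * N comparisons in the worst case.
def bubble_sort(items):
    n = len(items)
    for i in range(n):                  # outer pass: N iterations
        for j in range(n - 1 - i):      # inner pass over the unsorted tail
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```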

O(2^N)
Processing growth (data set size and time) will double with each additional element of the input data set. The execution time of an O(2^N) algorithm grows exponentially.

The 2 indicates that time or memory doubles for each new element in the data set.  In reality, these types of algorithms do not scale well unless you have a lot of fancy hardware.
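A classic illustration (my example, not from the original post) is the naive recursive Fibonacci, sketched below:

```python
# Each call spawns two more calls, so the call tree roughly
# doubles with each additional element of input: O(2^N) growth.
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)  # two recursive calls -> exponential blowup

print(fib(10))  # 55; try fib(35) to feel the exponential cost
```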

O(log N) and O(N log N)
Processing is iterative, and the growth curve rises quickly at the beginning of the execution and then slowly tapers off as the size of the data set increases.  For example, if a data set contains 10 items, it takes one second to process; if it contains 100 items, it takes two seconds; if it contains 1,000 items, it takes three seconds, and so on. Doubling the size of the input data set has little effect on growth because after each iteration the data set is halved. This makes O(log N) algorithms very efficient when dealing with large data sets.

Generally, log N implies log₂ N, which refers to the number of times you can partition a data set in half, then partition the halves, and so on.  For example, for a data set with 1,024 elements, you would perform at most 10 lookups (log₂ 1024 = 10) before either finding your value or running out of data.

Lookup #   Initial Data Set   New Data Set
1          1024               512
2          512                256
3          256                128
4          128                64
5          64                 32
6          32                 16
7          16                 8
8          8                  4
9          4                  2
10         2                  1

A good illustration of this principle can be found in Binary Search. It works by selecting the middle element of the data set and comparing it against the desired value to see if it matches. If the target value is higher than the value of the selected element, it selects the upper half of the data set and performs the comparison again; if the target value is lower, it performs the operation against the lower half. The algorithm continues to halve the data set with each iteration until it finds the desired value or until it exhausts the data set.
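A minimal sketch of the Binary Search just described:

```python
# Halve the sorted data set on every iteration, so a list of
# 1024 items needs at most 10 comparisons (log2(1024) = 10).
def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid              # found the desired value
        elif sorted_items[mid] < target:
            lo = mid + 1            # search the upper half
        else:
            hi = mid - 1            # search the lower half
    return -1                       # data set exhausted

print(binary_search(list(range(0, 2048, 2)), 512))  # 256
```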

The important thing to note about log2N type algorithms is that they grow slowly. Doubling N has a minor effect on its performance and the logarithmic curves flatten out smoothly.

Examples of these types of algorithms are Binary Search (O(log N)) and Heapsort, Quicksort, and Merge Sort (O(N log N)).

Scalability and Efficiency
An O(1) algorithm scales better than an O(log N),
which scales better than an O(N),
which scales better than an O(N log N),
which scales better than an O(N²),
which scales better than an O(2^N).

Scalability does not equal efficiency. A well-coded O(N²) algorithm can outperform a poorly-coded O(N log N) algorithm, but only for certain data set sizes and processing times. At some point, the performance curves of the two algorithms will cross and their relative efficiency will reverse.

What to Watch for when Choosing an Algorithm
The most common mistake when choosing an algorithm is the belief that an algorithm that was used successfully on a small data set will scale effectively to data sets 10x or 100x larger.

For most everyday situations with modest data sets, an O(N²) algorithm like Bubble Sort will work well. If you switch to a more complex O(N log N) algorithm like Quicksort, you are likely to spend a long time refactoring your code only to realize marginal performance gains.

More Resources
For a great illustration of various sorting algorithms in live action, check out David R. Martin’s animated demo.  For more formal coverage of algorithms, check out Donald Knuth’s epic publication on the subject, The Art of Computer Programming, Volumes 1–4.

If you are looking for some entertainment while learning the subject, check out AlgoRythmics’ series on sorting through dancing.

 

 

© Copyright 2011 Yaacov Apelbaum All Rights Reserved.