few problems, approaches and solutions

blog moved to ciju.in

Blog moved to http://ciju.in

min-cost perfect matching

If you would like to play with minimum-cost perfect matching, klein-tardos is a C++ implementation. The intent of the implementation was to understand how the algorithm works; also, most of the C++ I know comes from my experience with this problem, so its use should probably be limited to understanding and experimenting with the algorithm. My implementation is based on the algorithm presented in Algorithm Design by Kleinberg and Tardos. priqueue.h is an implementation of a priority queue with deletion. indexedList.h provides an abstraction over two lists with a reverse function, which switches an element from one list to the other. It is used to represent the outgoing and incoming edge lists of a vertex; the reverse function can be used to move an edge from the outgoing list to the incoming list.

The algorithm in general is called the Hungarian algorithm. The implemented algorithm is different from the ones here, here and here. Each iteration finds and augments along the shortest augmenting path in the bipartite graph, using Dijkstra's algorithm. But essentially it relies on the same fundamental concepts to solve the problem. The idea is to build a matching which doesn't have a negative cycle in its residual graph. A little more formally:

Consider a bipartite graph G = (V,E) with a cost associated with each edge. If M is a matching in G, GM denotes the corresponding residual graph for M. Quoting from Algorithm Design:

Let M be a perfect matching. If there are no negative-cost directed cycles C in GM, then M is a minimum-cost perfect matching.

Prof. Golin's notes explain the same concept as an Equality graph. The Hungarian algorithm counts on this fact to find the minimum-cost matching. A more general version of this statement is also the basis for minimum-cost flow algorithms.

The Hungarian algorithm, in each iteration, builds a larger matching while keeping GM free of negative-cost directed cycles. In the end, the algorithm reaches a perfect matching which is also a minimum-cost matching.

The tougher part is avoiding the negative-cost cycles. That's where the labeling comes in (Feasible labeling in Prof. Golin's notes). After each augmentation the graph is relabeled with reduced costs, which don't change the solution and also keep GM free of negative cycles.

If you want to know more, you should read the above-mentioned notes and, if possible, Algorithm Design. I have tried to keep the core part quite visible in the implementation. In fact, the main loop of the algorithm is just

void min_weight_assignment() {
    initiate_node_prices();

    for (int i=0; i<x.size(); i++) {
        dijkstra_spath(s);  // shortest paths from source
        augment_spath();
        reduce_node_prices();
    }
}

An observation: as I mentioned, all the approaches are almost equivalent. The matrix method seems different from the others, but the similarity can be seen by treating the column nodes as the left nodes of the bipartite graph, and the row nodes as the right ones.
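If you just want to compute minimum-cost matchings, rather than study the algorithm, SciPy ships a solver for exactly this assignment problem. A minimal usage sketch (assuming NumPy and SciPy are available):

import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j] is the cost of matching left node i to right node j
cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)  # a min-cost perfect matching
print(list(zip(rows, cols)))              # pairs (0,1), (1,0), (2,2)
print(cost[rows, cols].sum())             # total cost: 5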

my desktop customizations

I have Ubuntu installed on my machine. It's nice and easy to work on. Even then, I have my own set of customizations to add to the system.

The most drastic customization is disabling the window decorations. I have always found them of no use, except for controlling the window, which can easily be mapped to key-bindings. So now I have the luxury of more screen space, used for the purpose it should be.
Most of my time is spent in a terminal, a browser or Acrobat Reader. So again, there are some customizations to get better screen usage out of these applications. I set my terminal to be a little translucent. Apart from the aesthetic appeal, it lets me monitor what's going on in the background, if needed. Many times I use F11 to go fullscreen in the terminal. I use emacs for coding, blogging etc. Rather than using the default emacs window, I prefer running emacs inside the terminal (ex: emacs -nw). Coding with the background visible has its own appeal, and I can go fullscreen while coding without much effort (F11). Using the terminal as the frontend has another advantage: changing the font size on the fly. When I want a bird's-eye view of the code, I just do Ctrl-Minus (the minus sign on the keyboard), and Ctrl-0 to go back to the default font size.
Now comes the browser. Again, in need of more screen space, I use the Hide Menu extension to hide the whole menu bar. I also remove the Bookmarks Toolbar, use small icons, and press F11 when convenient. For Acrobat Reader, I frequently use the Ctrl-h and F9 key-bindings: Ctrl-h hides most bars except the menu bar, and F9 hides the menu bar. There are many more small customizations on my system, aliases, shortcuts etc. The ones presented here are just to pass on some ideas; there are many better places to learn more about them.
Here are a few screenshots of my desktops. In case you are interested to know, I steal my backgrounds from wallpaper.deviantart.com.

Four Cards, Gnome, and Goblin

Another interesting puzzle, Four Cards, from here.

Four cards are placed on a square table, one card at each corner. A blind gnome and an evil goblin take turns to play the following game. The blind gnome gets to choose a subset of the four cards and flip them simultaneously. If all four cards are face up, he wins the game. Otherwise, the evil goblin gets to rotate the table by an amount of his choice. Show that for any initial configuration of cards chosen by the evil goblin, the blind gnome can win the game in a finite number of steps with a deterministic strategy.

To get a better start with the problem, let me simplify it a little. In the modified problem, the gnome wins if all the cards face the same way: for example, either all cards face up, or all cards face down. This is a simpler problem.

Now let's take the simplified version. Consider the 4 cards on the table. Removing all the symmetric configurations, there are just three distinct configurations of cards on the table, apart from the winning configuration. For notational convenience, I will use D for a card facing down and U for a card facing up.

Case 1: Just one card differs from the others.
     |D D|
     |D U|

Case 2: Two cards up and two down, on the same side.
     |D U|
     |D U|

Case 3: Two cards up and two down, diagonally.
     |D U|
     |U D|

The gnome has to know which kinds of flips matter. He can't tell which particular cards he is flipping, so he can only distinguish flips by count and orientation. Again, there are just 3 kinds of flips that matter: flipping one card, flipping 2 diagonally, or flipping 2 on the same side.

Now map the transformations possible with these flips onto the 3 types of configurations.

1 -diag-> 1
1 -side-> 1
1 -one->  {2, 3, win}

2 -diag-> 2
2 -side-> {3, win}
2 -one->  1

3 -diag-> win
3 -side-> 2
3 -one->  1

Let's map the transformations for the winning strategy. We have to keep track of the set of possible cases that can occur after each transformation. Observe that case 1 is the most distant from winning, case 2 the second most, and case 3 the closest. What we would like is a way to shrink the set of possible configurations as we apply transformations: if case 3 is possible, finish it off first, and so on. If a win happens in between, the game ends automatically, so we only have to track the cases where it doesn't. Here's the solution:

{1,2,3} -diag-> {1,2} -side-> {1,3} -diag-> {1} -one-> {2,3} -diag-> {2} -side-> {3} -diag-> {sure win}

Now to the main problem. If the only winning position is all faces up, the game essentially remains the same, except for the number of configurations and transformations. Now there are 5 configurations that matter. The types of transformations remain the same, except for the special case of flipping all the cards; it is only useful in one case, since for the rest of the configurations flipping all doesn't change anything. Here's the list of cases and the useful transformations. I show the win case only if it is the only possible outcome.

Case 1: 
     |U U|   1 -diag-> {1,2} ; 1 -side-> {1,2} ; 1 -one-> {3,4}
     |U D|   

Case 2: 
     |D D|   2 -diag-> {1,2} ; 2 -side-> {1,2} ; 2 -one-> {3,4,5}
     |D U|

Case 3:
     |U U|   3 -diag-> {3} ; 3 -side-> {4,5} ; 3 -one-> {1,2}
     |D D|

Case 4:
     |U D|   4 -diag-> {5} ; 4 -side-> {3} ; 4 -one-> {1,2}
     |D U|

Case 5:
     |D D|   5 -diag-> {4} ; 5 -side-> {3} ; 5 -one-> {2} ; 5 -flipall-> {sure win}
     |D D|

Now the solution for the original problem. What we try to do is get all the cases down to case 5, and then flip all. Flip-all is represented by fa.

{1,2,3,4,5} 
-fa->   {1,2,3,4} 
-diag-> {1,2,3,5} 
-fa->   {1,2,3} 
-side-> {1,2,4,5} 
-fa->   {1,2,4} 
-diag-> {1,2,5} 
-fa->   {1,2} 
-one->  {3,4,5} 
-fa->   {3,4} 
-diag-> {3,5} 
-fa->   {3} 
-side-> {4,5} 
-fa->   {4} 
-diag-> {5} 
-fa->   {sure win}

Essentially, what we did was take the following action at each step:
if 5 is among the possible states, flip all
else if 4 is among the possible states, flip a diagonal
else if 3, flip a side
else if 2 or 1, flip one

So the gnome can win the game in 15 steps or less.
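Here is a quick brute-force check of the 15-move strategy, a Python sketch of my own (the 0-3 position numbering and the particular diagonal/side/single-card choices are mine; the specific choice doesn't matter since the goblin's rotations are accounted for). It tracks the set of configurations the cards could be in, closes it under rotation before every move, and drops any branch that has reached all-face-up:

from itertools import product

ALL_UP = (1, 1, 1, 1)

# The 15 moves derived above: fa, diag, fa, side, fa, diag, fa, one,
# fa, diag, fa, side, fa, diag, fa (as sets of table positions to flip).
FA, DIAG, SIDE, ONE = {0, 1, 2, 3}, {0, 2}, {0, 1}, {0}
MOVES = [FA, DIAG, FA, SIDE, FA, DIAG, FA, ONE,
         FA, DIAG, FA, SIDE, FA, DIAG, FA]

def rotations(state):
    """All configurations the goblin can present by rotating the table."""
    return {state[k:] + state[:k] for k in range(4)}

def flip(state, positions):
    """Flip the cards at the given table positions (1 = face up)."""
    return tuple(1 - c if i in positions else c for i, c in enumerate(state))

def steps_to_win():
    # start from every non-winning configuration the goblin might choose
    states = {s for s in product((0, 1), repeat=4) if s != ALL_UP}
    for step, move in enumerate(MOVES, 1):
        states = {r for s in states for r in rotations(s)}   # goblin rotates
        states = {flip(s, move) for s in states} - {ALL_UP}  # wins drop out
        if not states:
            return step
    return None

print(steps_to_win())  # should print 15, i.e. the strategy always wins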

splitting pdf slides: 4-up to 1-up


Many 4-up slide PDFs are quite inconvenient for onscreen reading, and quite often no other version is available. Here is a small and rough solution to the problem. First, you need to have Python on your system. Download and extract the pyPdf package. Download the code from here into the source directory of the pyPdf package and run it from there. To use it, give the input and output file names and the number of slides per page (<row> <col>) of the input pdf, and the program should work. If the slides don't show up in the proper order, give the order of slides you want at the end of the command line.

If you would like to install the pyPdf package instead, you would have to copy the page4eachXobj function into PageObject in pdf.py and add 'import copy' to the top of the file. While copying, be careful with the indentation. After this, my file can be placed anywhere.

The file has two methods to convert 4-up/6-up/… slides to the more convenient one slide per page. The first method, adv_slide_split, gives better output (i.e. similar to the original slides), but works in very few cases; it doesn't use the row/column information given at the input. The second method, slide_split, works by dividing the page into equal-sized pages; the result is usually not as good as the original 1-up, but it works in most cases. If possible, the program uses the first method, otherwise the second.

The rest of the post is about my experience with the problem; it is not required reading for using the program.

First off, you need to know that pdf is, in simple words, a collection of data plus instructions on how to present that data. It is different from (and more complex than) markup languages, as it has to deal with various devices, like printers, and not just the web, and it has to represent complex typography, graphics etc. in a concise way.

A pdf file contains lots of <num> 0 obj and endobj pairs. Called indirect objects, these are used as references to objects in the file: anything from the collection of pages to postscript instructions, fonts, colors etc. There are a few kinds of objects: name objects starting with '/', dictionary objects enclosed in a '<<' '>>' pair, array objects in a '[' ']' pair, and stream objects, mostly used for data. For our purpose, these should be enough. Indirect objects make it easy to share objects among objects! <num> 0 R is used to refer to an object. I have skimmed, skipped and missed much of the information; if you are interested, read the manual [4.5Mb]. Chapter 3 is the most relevant part.

I was lucky enough to stumble upon a nicely generated slide deck to dig into, similar to this one. A casual look into the raw file shows /Page tags.

7 0 obj <<
/Type /Page
/Contents 8 0 R
/Resources 6 0 R
/MediaBox [0 0 841.89 595.276]
/Parent 9 0 R
>> endobj

Each /Page tag has a /Resources entry with an object reference. The referred object is an /XObject dictionary with references to 4 objects.

6 0 obj <<
/XObject <</Im2 2 0 R /Im3 3 0 R /Im4 4 0 R /Im5 5 0 R>>
/ProcSet [ /PDF ]
>> endobj

I removed 3 of them from the dictionary. The pdf file indeed showed only the remaining slide (with empty space for the others), while xpdf complained about missing references. The manual mentions that /XObject names external objects, which means those names must be used somewhere else; but searching for the names shows only the /XObject entries. The Page object entry in the manual says that /Contents is where the contents of the page are described. But /Contents refers to a /FlateDecode stream (essentially a zipped binary of the page description). Fortunately, I didn't have to look inside it; all I needed was to display those external references, one per page. The external references in /XObject were themselves of type /XObject.

2 0 obj <<
/Type /XObject
...
>>
stream
...
endstream
endobj

The manual gives details about the type and how to display it. As a proof of concept, I changed a Page object's /Contents key to point to a custom stream, and the /Page to contain one slide. Here's the output. More work was needed to convert the whole document: new pages would be needed, as each page would have to be replaced with 4 new ones, references would have to be updated, page counts at all levels would need to be updated, the xref would have to be updated, and so on.

This needed considerably more effort than I intended. Again, fortunately, I came across pyPdf. It's a versatile library. The README itself shows the trick of cropping a page (although I would call it zooming). So all I had to do was copy each page 4 times and only show the relevant part in each copy.
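For the record, here is roughly what that looks like with the classic pyPdf API. This is only a sketch, not the script linked above: the function name split_nup and the left-to-right, top-to-bottom slide order are my own choices, and the exact calls may need tweaking for your pyPdf version.

import copy
from pyPdf import PdfFileReader, PdfFileWriter
from pyPdf.generic import RectangleObject

def split_nup(in_path, out_path, rows, cols):
    reader = PdfFileReader(open(in_path, "rb"))
    writer = PdfFileWriter()

    for p in range(reader.getNumPages()):
        page = reader.getPage(p)
        llx, lly = [float(v) for v in page.mediaBox.lowerLeft]
        urx, ury = [float(v) for v in page.mediaBox.upperRight]
        w, h = (urx - llx) / cols, (ury - lly) / rows

        for r in range(rows):            # top row of slides first
            for c in range(cols):        # left column first
                sub = copy.copy(page)    # the copies share the content stream
                sub.mediaBox = RectangleObject([
                    llx + c * w,       ury - (r + 1) * h,
                    llx + (c + 1) * w, ury - r * h])
                writer.addPage(sub)

    writer.write(open(out_path, "wb"))

# e.g. for a 2x2 handout:  split_nup("4up.pdf", "1up.pdf", 2, 2)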

The above method doesn't depend on the slides being available as separate resources; it only uses the MediaBox, so it works with most pdf files. But if the slides are available as separate resources, the earlier method results in a pdf closer to the original sources. For a comparison of the output, here are slides generated with the XObject method and with the MediaBox one. Fortunately the library also made it easier to implement the earlier /XObject method. It depends on the internals of the pyPdf implementation, so either you have to copy it into the pyPdf source file, or import all the internal classes of pyPdf.

In all, this is less than a day's (cumulative) work.

Beust Sequence! aka sequence of numbers without repeating digits.

The original statement of the problem is here. The problem asks you to generate an increasing sequence of numbers with no digit repeating within the same number. The problem is quite similar to generating permutations, except that a number cannot start with 0. Here is my take on the problem. There are three basic things to take care of: first, generating permutations; second, taking care of the initial-0 problem; and third, not going beyond the limit.

If we are given a list and asked to produce all permutations, all we have to do is, for each entry in the list, consider the permutations of the rest of the entries (this applies recursively). To generate permutations in lexical order, we just have to select the elements in increasing order.
In the code, lst stores the lexically ordered digits. Each recursion chooses one element from the list and passes the rest to the next recursive call.

The second problem is to not count initial 0's. This is handled by left and the associated recursive call. left essentially tracks whether all the entries to the left are 0. If all the left entries are 0, and the current choice is also 0, then we pass the whole choice list to the next recursive call.

The third is to not cross the limit. This is the part which convolutes my solution. The variables atmax, rnge, chldmaxd and maxd all work towards solving this part. The idea is to keep track, from the left, of the digits which have reached the maximum. Similar to left before, atmax says whether all choices to the left have reached their maximum. This is used to restrict the choices for the next recursive call. For example, if the limit is 1245 and the choice so far is 12, then the choice at the next level is restricted to 0-4; but if it is, say, 10, then the next choice can be any remaining digit.

sol = []
def seq(inp):
    digits = tuple(range(10))
    maxd = [int(i) for i in str(inp)]
    # limit digits in reverse, with the first element always 0
    # ex: maxd would be [0, 5, 2, 1] for a limit of 125
    maxd.append(0)
    maxd.reverse()

    srep = tuple(str(i) for i in range(10))

    def seq_gen(n, atmax, lst, s, left):
        if n == 1:
            for i in lst:
                sol.append(s + srep[i])
            return

        # largest digit allowed at this position
        rnge = maxd[n] if atmax else 9

        chldmaxd = False
        for i in range(len(lst)):
            if lst[i] <= rnge:
                if atmax and lst[i] == rnge:
                    chldmaxd = True
                if left and lst[i] == 0:
                    # leading zero: keep the full digit list available
                    seq_gen(n-1, chldmaxd, lst, s, True)
                else:
                    seq_gen(n-1, chldmaxd, lst[:i] + lst[i+1:],
                            s + srep[lst[i]], False)
            else:
                break

    seq_gen(len(maxd) - 1, 1, digits, "", True)

The code's performance is helped by a simple choice of termination condition: stop the recursion at n == 1 and append the remaining digits directly, instead of recursing one more level.

        if n == 1:
            for i in lst:
                sol.append(s + srep[i])
            return

Timing analysis for the solution.
real 0m29.297s
user 0m28.762s
sys 0m0.344s

As approaches in Java performed quite a bit faster than this, I tried my own implementation in C/C++, with Bob's idea of using integers rather than strings to store the sequence. It takes about 230ms ($ time ./a.out) to generate the whole sequence on my system.

The code is available here.

On the other hand, if we just want the count of such numbers (8877690), then here is one approach. The numbers range from 1 digit to 10 digits, because there are only 10 distinct digits and we can't go beyond that without repetition. Considering each such case (say, 3-digit numbers), the most significant digit can't be 0, because otherwise it just becomes a 2-digit number (which we account for separately). So there are 9 options for the first digit [1-9]. From the second digit onwards, we just count the permutations of the remaining digits. In our example of 3-digit numbers, this turns out to be 9*(9*8), written better as 9*(9!/7!).
Doing this for all 1, 2, …, 10 digit lengths gives us the answer.
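As a quick sanity check of that arithmetic (a throwaway snippet, not part of the original code):

from math import factorial

# k-digit numbers with no repeated digit: 9 choices for the first digit,
# then a permutation of the remaining 9 digits over the other k-1 positions
total = sum(9 * factorial(9) // factorial(10 - k) for k in range(1, 11))
print(total)  # 8877690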

twiddling with weblogger.el (emacs + wordpress)

Although wordpress.com provides a feature-rich editor, I prefer writing my posts in emacs. Edit (in emacs), copy, paste (to wordpress) is possible, but it would be more interesting to post from within emacs. That's where weblogger.el comes in. Here is some help on how to set it up. Still, I was a little uncomfortable with the default configuration. For example, I am rather used to typing \C-x\C-s in emacs, and each time I did this, the post would be published and the buffer closed; I would have to fetch the blog entries again (\M-x RET weblogger-fetch-entries).

This post is about the hacks made to weblogger.el to suit my needs. If it helps you, feel free to download the file from here. Rename it with the extension .el.

Essentially what I wanted was for the modes to save the post (either published or as a draft) without closing the buffer. The change for this seemed simple: just remove the (bury-buffer) from the respective functions. The exception was that if we started with a new post, saving the buffer would send it to the server, but the buffer itself wouldn't be updated with the information for the new post created at the server. So I changed the code to look for this exception while saving: if the post is new, simply do (weblogger-fetch-entries).

A few more changes to customize my editing experience. I like editing/programming in auto-fill mode, but the available width for posts in wordpress themes usually doesn't match that of auto-fill mode. The alternative was longlines mode. It allows editing with word wrapping (soft newlines) and doesn't introduce any (hard) newlines. Unfortunately, in this mode too, saving a post caused all the soft newlines to show up as real newlines. What I needed was a way to turn off longlines-mode before saving, and then turn it back on if the buffer is not to be buried. I have introduced two new hooks: one to be run before saving the draft, and the other to be run afterwards, if the buffer is not to be closed.

Here are the weblogger related initializations from my .emacs

(add-hook 'weblogger-start-edit-entry-hook
	  (lambda()
	    (flyspell-mode 1)
	    (flyspell-buffer)	; spell check the fetched post
	    (longlines-mode)))
(add-hook 'weblogger-publish-entry-hook
	  (lambda()
	    (when longlines-mode
	      (longlines-mode-off))))
(add-hook 'weblogger-publish-entry-end-hook
	  (lambda()
	    (longlines-mode)))

(weblogger-select-configuration "my configuration name" )

The two tools of the trade that helped me the most were eval-buffer and \C-h f.

  • \M-x eval-buffer evaluates a buffer, so that you don't have to restart emacs after each change to see its effects.
  • Emacs lisp is a self-documenting system, where you provide the description of a function in the function itself. \C-h f <name of function> shows the description of the function.

That's how I tried to understand the code structure and the various library functions. To know the key bindings and command options, try \C-h a weblogger. It's fairly easy to change the key bindings by editing the weblogger.el file.

I am a fairly novice emacs and emacs lisp user, so please do consider the chance of disk-formatting bugs lurking in my code.

google treasure hunt

Two problems from the Google treasure hunt which, although not very tough, were nevertheless interesting.

One was about a robot starting from the top-left corner of a matrix, trying to reach the bottom-right corner. The robot can only go right or down. The problem was to find the number of unique possible paths. My first thought was to use dynamic programming. Some more thought, and I was able to see a pattern in the solution. The basic idea, as in dynamic programming, is to solve the smaller problems first. The observation was that the paths can be calculated one level at a time, along a line parallel to the anti-diagonal, as it sweeps through the matrix from the destination cell towards the starting cell.

Here’s the code doing just that.

#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

int main(int argc, char ** argv)
{
  int r, c, i, j, diag;
  unsigned long long *pa;
  assert(argc == 3);

  r = atoi(argv[1]);
  c = atoi(argv[2]);

  /* pa holds the path counts along a line parallel to the anti-diagonal */
  diag = r<c? r:c;
  pa = calloc( diag, sizeof pa[0] );  /* calloc zero-initializes */
  pa[0] = 1;

  for (i=0;i<r+c-2;i++)
    for (j=diag-1;j>0;j--)
      pa[j] = pa[j] + pa[j-1];

  printf("paths %llu \n", pa[diag-1]);

  free(pa);
  return 0;
}

The implementation worked for small inputs, but for the actual input, long long was not enough (fortunately, and presumably as intended by the problem designer). Some more thought about the process, and the insight came: the pattern seemed familiar, although incomplete. It was Pascal's triangle. It was easy to find the entry needed from Pascal's triangle and its corresponding value, C(n+m-2, n-1). But calculating it would need more precision than what's available with the C data types (otherwise my program would have given the answer). Google also didn't help. Looking for a bignum library led me to this site, which provides an expression calculator showing the capabilities of the library. So all I had to do was fill in the expression and copy-paste the solution.
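Of course, another easy way out (in hindsight) is any language with arbitrary-precision integers; a quick Python sketch:

from math import factorial

def paths(rows, cols):
    # closed form from Pascal's triangle: C(rows+cols-2, rows-1)
    n, k = rows + cols - 2, rows - 1
    return factorial(n) // (factorial(k) * factorial(n - k))

print(paths(3, 3))  # 6, matching a small run of the C program above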

The other one was the last problem. Here is a verbatim instance of the problem.

Find the smallest number that can be expressed as
the sum of 3 consecutive prime numbers,
the sum of 35 consecutive prime numbers,
the sum of 553 consecutive prime numbers,
the sum of 829 consecutive prime numbers,
and is itself a prime number.
For example, 41 is the smallest prime number that can be expressed as
the sum of 3 consecutive primes (11 + 13 + 17 = 41) and
the sum of 6 consecutive primes (2 + 3 + 5 + 7 + 11 + 13 = 41).

The shorter sequence always ends with larger primes than the longer sequence, the reason being that the same sum is distributed over a smaller set of consecutive primes. How does that help? Slide the shortest sequence to include the next prime and, simultaneously, update all the longer sequences to have a sum greater than or equal to the sequences shorter than them. Here's the slightly cryptic code for the solution.

struct cseq {
  int seql;			/* seq len */
  int seqt;			/* seq top */
  int seqs;			/* seq sum */
} *sa;

void slide_sequence(struct cseq *s)
{
  s->seqt++;
  s->seqs -= prime[s->seqt - s->seql];
  s->seqs += prime[s->seqt];
}

int align_to_sum(struct cseq *s, int sum)
{
  while (s->seqs < sum) slide_sequence(s); 

  return s->seqs == sum;
}

int main()
{
  /* initialize sa and primes array and other variables. */ 

  for each consecutive prime {
    /* invariant: longer sequences have sum >= shorter ones. */
    eqcnt=0;

    slide_sequence( &(sa[0]) );

    for (j=1;j<scnt;j++)
      if (align_to_sum(&(sa[j]), sa[0].seqs))
        eqcnt++;

    if (eqcnt == scnt-1) {
      if (isprime(sa[0].seqs)) {
        print_sequence(sa, scnt);
        break;
      } else
        continue;
    }
  }
}

I used an already generated list of primes, from here.
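If you'd rather not hunt down a prime list, here is a self-contained Python sketch of the same sliding-window idea. It is my own rewrite, not the C code above, and the sieve bound of ten million is an assumption that comfortably covers the answer shown below.

def primes_upto(limit):
    """Sieve of Eratosthenes: returns (list of primes, primality table)."""
    is_p = bytearray([1]) * (limit + 1)
    is_p[0] = is_p[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            is_p[i*i::i] = bytearray(len(range(i*i, limit + 1, i)))
    return [i for i in range(limit + 1) if is_p[i]], is_p

def catch_up(w, length, primes, target):
    """Slide window w until its sum is >= target; True if it hits it exactly."""
    while w[1] < target:
        w[1] += primes[w[0] + length] - primes[w[0]]
        w[0] += 1
    return w[1] == target

def solve(lengths=(3, 35, 553, 829), bound=10**7):
    primes, is_prime = primes_upto(bound)
    # one window per required length: [start index, current sum]
    windows = {L: [0, sum(primes[:L])] for L in lengths}
    shortest = min(lengths)
    while True:
        w = windows[shortest]                 # slide the shortest window by one
        w[1] += primes[w[0] + shortest] - primes[w[0]]
        w[0] += 1
        target = w[1]
        hits = [catch_up(windows[L], L, primes, target)
                for L in lengths if L != shortest]
        if all(hits) and target <= bound and is_prime[target]:
            return target

print(solve())  # should print 9221231, the sum in the run below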

Here are the timing details.

$time ./a.out
seq: 3, start: 221709, end: 221711, sum: 9221231
seq: 35, start: 23088, end: 23122, sum: 9221231
seq: 553, start: 1650, end: 2202, sum: 9221231
seq: 829, start: 930, end: 1758, sum: 9221231

real 0m0.210s
user 0m0.208s
sys 0m0.000s

Problem 4-2

Problem:

Given an array of size n with distinct elements such that 0 <= A[i] <= n for each element. Note that the array must be missing an integer from the range [0,n], since the array has size n and the range has n+1 elements. The only way to access elements is through the function

bit_access(int * A, int i, int j)

which returns the j-th bit of element i in array A.

We have to find out the missing number in O(n) time.

Approach:

Let b be the position of the most significant set bit of n (counting bit positions from 0). The elements of A can be divided into two sets based on bit b: those which have the bit set, and those which don't. In the full range [0,n], exactly 2^b values have bit b equal to 0 (the values 0 through 2^b - 1), and the rest have it set. If the number of elements with bit b equal to 0 doesn't match this expected count, the missing element has bit b equal to 0; otherwise it has bit b equal to 1. The algorithm works on this observation by first partitioning the array according to bit b and then applying the same logic recursively on the sub-array which is missing an element from its range.

Here's the code for the partition and the recursive search for the solution.

/* partition the array according to bit b of each element and
   return the partition index (count of elements with bit b == 0). */
int partition(int * arr, int n, int b)
{
  int i, part=0;

  /* invariant: arr[0..part-1] are the elements seen so far with bit b == 0. */
  for (i=0;i<n;i++) {
    if(!bit_access(arr, i, b)) {
      swapa(arr, i, part);
      part++;
    }
  }

  return part;
}

int find_misng_rec(int *arr, int n, int num, int bits)
{
  int part;

  /* empty sub-array: the missing number's remaining low bits are all 0 */
  if (n==0) return num << (bits+1);

  part = partition(arr, n, bits);

  if (part < 1<<bits)
    return find_misng_rec(arr, part, num<<1, bits-1);
  return find_misng_rec(arr+part, n-part, (num<<1)+1, bits-1);
}

The tail recursion can be optimized away (or rewritten as a loop).
The recursion is T(n) = T(n/2) + O(n), which solves to O(n), hence a solution to the problem.
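Here is a small Python rendering of the same idea, written as a loop instead of tail recursion. It is my own sketch for experimentation: bit_access is simulated on a plain list, and the quick check at the end is not part of the original.

def bit_access(arr, i, j):
    """Simulated accessor: the j-th bit of element i (the only value access)."""
    return (arr[i] >> j) & 1

def find_missing(arr, n_max):
    """arr holds the values 0..n_max with exactly one value missing."""
    arr = list(arr)            # work on a copy; partitioning reorders elements
    lo, hi = 0, len(arr)       # the current sub-array is arr[lo:hi]
    num = 0                    # bits of the missing number found so far
    bits = n_max.bit_length() - 1
    while bits >= 0:
        # partition arr[lo:hi] on the current bit: zero-bit elements first
        part = lo
        for i in range(lo, hi):
            if not bit_access(arr, i, bits):
                arr[i], arr[part] = arr[part], arr[i]
                part += 1
        if part - lo < (1 << bits):          # a zero-bit value is missing
            num, hi = num << 1, part
        else:                                # a one-bit value is missing
            num, lo = (num << 1) | 1, part
        bits -= 1
    return num

# quick check over every possible missing value for a small range
n = 11
for missing in range(n + 1):
    values = [v for v in range(n + 1) if v != missing]
    assert find_missing(values, n) == missing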

subtleties of programming

While coding for problem 2.3-7, somehow I was not in my default mode of not thinking while coding. To test my programs I was writing a random array generator, one part of which was to swap two array entries. Now, the most straightforward way to swap two array elements is

int tmp = arr[i];
arr[i]  = arr[r];
arr[r]  = tmp;

But, knowing the xor method, I thought why not save a variable. Basically, if a and b are to be swapped, then the following code suffices.

a ^= b;
b ^= a;
a ^= b;

So now the question is, would this work on an array? For example:

arr[i] ^= arr[r];
arr[r] ^= arr[i];
arr[i] ^= arr[r];

Here's a subtle bug: if i and r are the same, it won't. The reason is that we lose the original value; if both indexes are the same, the array entry gets xor'ed with itself and becomes 0. The wiki page explains it better, and why the trick should be avoided on desktop systems. Again, I learn the evils of premature optimization. Subtleties of programming!
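A quick demonstration of the aliasing problem (shown here in Python; the effect is the same in C):

arr = [5, 7, 9]
i = r = 1
arr[i] ^= arr[r]   # arr[1] becomes 7 ^ 7 = 0; the original value is gone
arr[r] ^= arr[i]
arr[i] ^= arr[r]
print(arr)         # [5, 0, 9]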
