64 Bit Architecture: What It Means, and What It Means For You

With Apple's big announcement of new iPhones tomorrow, many in the tech community (both here at Gizmodo and, well, just about everywhere else) are abuzz about the new A7 processor...chiefly, the move (a first for any cellphone) to 64-bit processor architecture.

However, many in the community don't know what this means (or are talking about it without really understanding it). That's OK! Processors are outrageously complex, and even those who understand them don't TRULY understand them, unless they're one of the few people who build computer hardware for a living (a group I'm not a part of).


That said, I'm going to try to explain it at a base level for those of you who are curious about what 64-bit MEANS...and what it means for you. I'm by no means an expert, so feel free to sound off in the comments if I've said something that is inaccurate.

I guess to talk about 64-bit, we first need to clear up what a bit IS. Computers operate with a binary number system (humans use decimal). This means that everything a computer deals with is just a string of 1s and 0s, and each of these digits (a 1 or a 0) is referred to as a BIT. That's also the unit behind internet speeds and storage sizes (all measured in either bits or bytes, a byte being 8 bits...a size that was settled on somewhat arbitrarily a long time ago).

At its simplest, the bit-value of a processor (in recent years, always either 32- or 64-bit) is just the number of bits it uses for its memory addresses. This means that on a 32-bit computer, the address of a location in memory is 32 bits long, and on a 64-bit computer, it's 64 bits long.

So, knowing this, one can calculate the maximum possible amount of memory a processor can interface with by taking 2 and raising it to the power of the number of bits you have. What this means is that a 32-bit processor can see (or "address") 2^32 different memory locations. The actual amount of memory also depends on what size chunks the processor addresses...in practice, nearly all modern processors address memory by the byte, but for simplicity's sake, let's just pretend our imaginary processor addresses individual bits.


With that said, we can see that a 64-bit system can address 2^64 different memory locations. If you do the math (2^32 is about 4.3 billion, while 2^64 is about 18.4 quintillion), you'll see that a 64-bit computer has access to a vastly larger address space. At the simplest level, this is what 32 vs 64-bit means.

We don't yet know how much memory the new iPhone has, but if it's over 2 GB I'll be surprised. Either way, it is almost assuredly not going to have more RAM than the 4 GB a 32-bit processor can already address. So the advantage of the new A7 processor is not going to have anything to do with memory capacity (perhaps it will, circumstantially, result in faster RAM, but that's a different conversation).


To see the real advantage this new architecture represents, you have to look at how a CPU works. CPUs contain small buckets of memory, called registers, that they load data into and operate on. On a load/store architecture like ARM, a CPU can't operate directly on data stored in RAM; it has to "pull it in" to a register in order to work with it. In most cases, a CPU's registers are at least as large as its memory addresses...that's the smallest they can practically be, because the CPU needs to be able to load those address values, and making the registers much larger than that can unnecessarily complicate things. So, a 32-bit CPU has 32-bit registers, while a 64-bit CPU has 64-bit registers.

What this means is that the largest possible number a CPU can operate on in a single operation is bound by its register size, which is linked to its "bit rating" (for lack of a better word). A calculation on large values (be it a gigantic integer or a very precise floating point number) has to be broken into 2 or more operations on a 32-bit CPU, but the same calculation can be done in a single operation on a 64-bit CPU. 128-bit values would take 4 operations on a 32-bit CPU vs 2 on a 64-bit CPU, and so on. (It's not quite that cut and dried...there are also the "move" operations and the logic operations required to split up and put back together these large values, so it's not as simple as 1 vs 2 or 2 vs 4, but the analogy serves its purpose.)


So, this is where the "twice as fast" saying about 64-bit vs 32-bit comes into play. That saying isn't entirely true, but the idea is that some operations can be made UP TO twice as fast.

So, what does this mean in your everyday life? Certain operations aren't going to be made any faster at all...applications that deal with just small-integer and character operations, comparisons, etc. won't see a difference (a word processor, for example). You won't be seeing a vast improvement in the performance of your notes app due to the new architecture. However, things that are graphics intensive and require a lot of operations on pixel and color data, physics calculations, etc. will benefit greatly...these are large numbers being operated on, and there are a lot of them, so you can see where the larger registers come into play.


There's a lot of talk about how hard or easy this is going to be for developers to adopt, and whether they'll leave older phones behind in order to support only 64-bit CPUs. I'm of the opinion that that won't be an issue, but it remains to be seen. The point of this post is really to explain what 64-bit REALLY means, and to help people understand what it's going to improve...and what isn't really going to change much.
