# Can a CPU use multiplication instead of adding?

I was thinking about the way CPUs work, and I wonder: instead of adding one instruction at a time, could a CPU do some sort of multiplication over rows and columns of transistors and obtain an area calculation, which could give faster answers and a lot more processing power? Or could it be done with derivatives or integrals, using blocks of transistors like a graphics problem on a number axis? Or could it maybe calculate separate blocks with area calculations of geometric shapes?

Somehow maybe it’s possible to calculate an area instead of adding transistor by transistor, because I think of it like this: 1+1+1+1+1+1+1+1+1+1 = 10, or we can do 2×5 = 10 and save a lot of bits…

## 13 Answers

Multiplication **is** addition.

CPUs use repeated addition to multiply. There is also a lot of optimization under the hood that a programmer is blissfully unaware of. You can do this in parallel rather than sequentially for a quicker path to the answer. Graphics processors are very good at this; see CUDA.

Multiplication is repetitive adding, meaning that if we want to add 1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1 = 20, we can simply do 2×10. It’s easy; all it takes is for the processor to have a multiplication table, or to resolve it with an area calculation.
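The "repetitive adding" idea can be sketched directly in a few lines (purely illustrative; this is not how real hardware multiplies):

```python
def multiply_by_repeated_addition(a: int, b: int) -> int:
    """Naive multiplication: add a to itself b times (b separate additions)."""
    total = 0
    for _ in range(b):  # requires b >= 0
        total += a
    return total

print(multiply_by_repeated_addition(2, 10))  # → 20
```

Note that the number of additions grows with the multiplier itself, which is exactly the objection raised in the next answer.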

Computers do not use repeated addition to multiply. If they did, it would take a long time to multiply 12345678×89654321.

The integers are converted to binary 0s and 1s. Then those numbers are multiplied using a simple algorithm. Here is an example from Stack Overflow:

Multiply 100 × 55 (100 = 1100100, 55 = 0110111):

          1100100    (100)
        × 0110111    (55)
        ---------
          1100100
         1100100
        1100100
       0000000
      1100100
     1100100
    0000000
    -------------
    1010101111100    (= 5500)
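The long multiplication above is the classic shift-and-add algorithm; a minimal sketch in Python:

```python
def shift_and_add(a: int, b: int) -> int:
    """Multiply non-negative integers using only shifts and additions."""
    result = 0
    while b:
        if b & 1:        # is the low bit of the multiplier set?
            result += a  # add the (shifted) multiplicand
        a <<= 1          # shift the multiplicand one place left
        b >>= 1          # consume one multiplier bit
    return result

print(shift_and_add(100, 55))       # → 5500
print(bin(shift_and_add(100, 55)))  # → 0b1010101111100
```

This takes one step per bit of the multiplier rather than one step per unit of its value, so 12345678 × 89654321 needs only about 27 iterations instead of tens of millions of additions.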

There are other algorithms, depending upon the processor. Some are optimized for multiplying arrays.

But the basic operations are additive; how else could it work?

I think (not sure) that the multiplication in LuckyGuy’s example is done using the equivalent of a multiplication table, like the one we have in our heads. We know 2×3 = 6; we don’t have to look it up.

In binary, multiplication has only two possibilities: 1×1 = 1 and everything else = 0, so the hardware (or algorithm) can be extremely simple.
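That one-bit "multiplication table" is identical to a logical AND, which is why a single binary partial product needs no real arithmetic at all; a quick check:

```python
# The complete one-bit multiplication table: it is exactly logical AND.
for x in (0, 1):
    for y in (0, 1):
        print(f"{x} × {y} = {x & y}")
```

In hardware this means one AND gate per partial-product bit, which is what makes the circuit so simple.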

@LuckyGuy CPUs have low-level operations for multiplication and division that do not use addition. Before the late 1990s, those operations were slower than addition, particularly when not multiplying by powers of 2, so some programmers went out of their way to cleverly use addition and bit-shifting instead, in order to make some calculations run faster.
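The shift tricks mentioned here rely on a left shift by n being a multiplication by 2ⁿ, and on splitting other multipliers into powers of two (a sketch; the choice of 37 and 10 is just for illustration):

```python
x = 37
# Multiply by a power of two with a single shift:
print(x << 3)               # same as x * 8  → 296
# Multiply by 10 using only shifts and one add (10 = 8 + 2):
print((x << 3) + (x << 1))  # same as x * 10 → 370
```

This is the kind of rewrite old-school programmers did by hand; modern compilers apply it automatically when it actually helps.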

However, since the late 1990s many CPUs have division operations so fast that it’s not worth the trouble. And since modern CPUs are now very complex and compiler optimizations are also complex and very good, it’s increasingly impractical to drop to that level and try to do better than the compiler already does, unless you detect a particular bottleneck in some heavy calculation.

@Zaku Agreed. I was trying to give an example of how multiplication is not done with addition alone. It uses 1×1 = 1 and everything else is zero. A simple shift is performed on the multiplicand as you work your way down the multiplier. This is very old, basic programming from the stone age. :-)

Back in the 1990s we had specially designed processors that computed linear interpolations over 2-dimensional arrays in just 2 steps. Almost everything was done in parallel. Processors now are even more powerful; they make our old 10-bit, “lightning fast” processors look like abaci (abacuses).

@LuckyGuy @Zaku But first of all, wouldn’t a definition of multiplication be necessary in the CPU itself?

@LuckyGuy I’m sorry – I mis-addressed that to you. I wasn’t disagreeing with or responding to what you wrote. I meant to respond to @slick1985’s response to you.

@slick1985 Well, yes, sort of. Computer circuits don’t really have *concepts*, though – they have circuits that do things, designed by people who want certain operations done, and those operations are based on concepts.

They (using incredible microprocessor factories) build a microscopic circuit machine that, when it performs the operation designed for multiplication, works on the representations of the numbers being multiplied in a way that produces the correct result. Because multiplication is such a common function for computers, they build the circuit so that it does several things at once, making it possible to generate the correct result in about the same amount of time it takes to generate the result of an addition.

Imagine you have a bunch of abacus workers, and the most complex multiplication takes up to 100 times as many abacus steps as an addition. The chip designers did the metaphorical equivalent of training 100 abacus slaves to each do part of the calculation at the same time, so that the 100 of them deliver the product just as quickly as one slave can do the addition. Because the slave drummer (the CPU clock) coordinates their actions, the overlords get either sort of answer in the same amount of time. Then their court wizard shrank all the slaves to less than the size of a speck of sand, so you can watch cat videos at as high a frame rate as possible.
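The parallel-abacus picture can be sketched in software: every partial product is formed independently (hardware produces them all simultaneously), and then they are reduced with additions (hardware uses a tree of adders rather than a sequential loop):

```python
def multiply_in_parallel_style(a: int, b: int) -> int:
    """Form all partial products independently, then sum them."""
    # Each partial product depends only on a, b, and the bit index i,
    # so in hardware all of them can be produced at the same time.
    partials = [a << i for i in range(b.bit_length()) if (b >> i) & 1]
    # A real multiplier sums these with an adder tree; sum() models the reduction.
    return sum(partials)

print(multiply_in_parallel_style(12345678, 89654321) == 12345678 * 89654321)  # → True
```

The sequential `sum()` here stands in for the adder tree; the point is that no partial product has to wait for another before it can be computed.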

Might table lookup be used? An 8-bit by 8-bit table has 64K entries. It could be used to multiply 8 bits at a time, adding in the carries.
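A sketch of that table-lookup idea (the 64K-entry table and the byte-splitting scheme are illustrative, not any particular CPU's design):

```python
# Precompute all 8-bit × 8-bit products: 256 × 256 = 64K entries.
TABLE = [[a * b for b in range(256)] for a in range(256)]

def mul16_by_table(x: int, y: int) -> int:
    """Multiply two 16-bit numbers using only 8×8 lookups, shifts, and adds."""
    xl, xh = x & 0xFF, x >> 8    # split each operand into low and high bytes
    yl, yh = y & 0xFF, y >> 8
    return (TABLE[xl][yl]                # low  × low
            + (TABLE[xh][yl] << 8)       # high × low,  shifted one byte
            + (TABLE[xl][yh] << 8)       # low  × high, shifted one byte
            + (TABLE[xh][yh] << 16))     # high × high, shifted two bytes

print(mul16_by_table(12345, 6789) == 12345 * 6789)  # → True
```

The byte-splitting is just schoolbook multiplication in base 256: four lookups and three shifted additions replace one 16×16 multiply.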
