# Tutorial: The new BigInteger

### Question:

.NET 4.0 introduces a new data type, System.Numerics.BigInteger. From what I understand, this can hold numbers with up to a million digits. Simple arithmetic operations can be performed on such a number. What I am wondering is how Microsoft implemented such a thing, given that it would obviously exceed 32 bits and even 64 bits. How does this not overflow?

### Solution:1

Arithmetic operations have been performed on values that exceed the native integer (and floating-point) sizes for quite some time. This is ordinarily done by turning a single conceptual arithmetic operation on the larger value (addition, for example) into a series of operations on multiple native-width values, propagating carries between them.
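The idea above can be sketched in a few lines. This is an illustrative model, not .NET's actual code: a large number is stored as a little-endian list of 32-bit "limbs", and one conceptual addition becomes a loop of native-width additions with carry propagation.

```python
MASK = 0xFFFFFFFF  # each limb holds 32 bits

def add_limbs(a, b):
    """Add two non-negative numbers given as little-endian 32-bit limb lists."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        x = a[i] if i < len(a) else 0
        y = b[i] if i < len(b) else 0
        total = x + y + carry
        result.append(total & MASK)  # low 32 bits stay in this limb
        carry = total >> 32          # overflow carries into the next limb
    if carry:
        result.append(carry)
    return result

# 2**32 - 1 plus 1 overflows the first limb and carries into a second:
print(add_limbs([0xFFFFFFFF], [1]))  # [0, 1], i.e. 2**32
```

Nothing ever overflows because each intermediate `total` fits comfortably in the machine's native arithmetic; the carry simply moves the excess into the next limb.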

### Solution:2

BigInteger uses arbitrary-precision arithmetic.

In computer science, arbitrary-precision arithmetic is a technique whereby calculations are performed on numbers whose digits of precision are limited only by the available memory of the host system.

Use it only when you need to work with very large numbers:

Arbitrary precision is used in applications where the speed of arithmetic is not a limiting factor, or where precise results with very large numbers are required.

### Solution:3

Internally, the BigInteger type is implemented as an array of unsigned 32-bit integers (a uint[] in C#) plus a separate field that stores the sign.

The array gives the type the ability to store arbitrarily large numbers, while the methods and operators hide the details of working with this structure, making it easy to use.
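A toy model of that layout (an assumption for illustration, not the actual .NET source): a sign field of -1, 0, or 1, and a little-endian array of 32-bit limbs holding the magnitude.

```python
def to_limbs(value):
    """Split an integer into (sign, little-endian list of 32-bit limbs)."""
    sign = (value > 0) - (value < 0)
    magnitude, limbs = abs(value), []
    while magnitude:
        limbs.append(magnitude & 0xFFFFFFFF)  # lowest 32 bits become one limb
        magnitude >>= 32
    return sign, limbs

def from_limbs(sign, limbs):
    """Rebuild the integer from the sign and limb array."""
    magnitude = sum(limb << (32 * i) for i, limb in enumerate(limbs))
    return sign * magnitude

n = -(10**20)            # needs three 32-bit limbs
sign, limbs = to_limbs(n)
print(sign, len(limbs))                # -1 3
print(from_limbs(sign, limbs) == n)   # True: the round trip is lossless
```

Keeping the sign separate from the magnitude (sign-magnitude representation) simplifies the arithmetic routines, since they can operate on non-negative limb arrays and fix up the sign at the end.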

### Solution:4

Exactly the same way that you do arithmetic with the digits 0-9: you do not have to borrow someone else's fingers to make change for a twenty. The BigInteger class uses 32-bit (or 64-bit) integers in much the same way you use digits. This is oversimplifying quite a bit, particularly as the numbers get large, but the basic idea is the same carrying and borrowing you learned in grade school.
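The grade-school analogy translates directly to code. Below is schoolbook long multiplication where each "digit" is a 32-bit limb rather than 0-9; this is a sketch of the basic technique, not .NET's implementation, and real libraries switch to faster algorithms for very large operands.

```python
MASK = 0xFFFFFFFF  # each limb is one base-2**32 "digit"

def mul_limbs(a, b):
    """Schoolbook multiplication of little-endian 32-bit limb lists."""
    result = [0] * (len(a) + len(b))
    for i, x in enumerate(a):
        carry = 0
        for j, y in enumerate(b):
            # Same as pencil-and-paper: multiply one digit pair, add what is
            # already in this column plus the incoming carry, keep the low
            # "digit", and carry the rest into the next column.
            total = result[i + j] + x * y + carry
            result[i + j] = total & MASK
            carry = total >> 32
        result[i + len(b)] += carry
    while len(result) > 1 and result[-1] == 0:
        result.pop()  # trim leading zero limbs
    return result

# (2**32 - 1)**2 = 0xFFFFFFFE_00000001, which spans two limbs:
print(mul_limbs([0xFFFFFFFF], [0xFFFFFFFF]))  # [1, 4294967294]
```

Just as with digits 0-9, no single step ever needs more than "one digit times one digit plus carries", so nothing overflows the native arithmetic doing the work.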