Node:Low-level Functions, Next:Random Number Functions, Previous:Floating-point Functions, Up:Top
This chapter describes low-level GMP functions, used to implement the high-level GMP functions, but also intended for time-critical user code.
These functions start with the prefix mpn_.
The mpn
functions are designed to be as fast as possible, not
to provide a coherent calling interface. The different functions have somewhat
similar interfaces, but there are variations that make them hard to use. These
functions do as little as possible apart from the real multiple precision
computation, so that no time is spent on things that not all callers need.
A source operand is specified by a pointer to the least significant limb and a limb count. A destination operand is specified by just a pointer. It is the responsibility of the caller to ensure that the destination has enough space for storing the result.
With this way of specifying operands, it is possible to perform computations on subranges of an argument, and store the result into a subrange of a destination.
A common requirement for all functions is that each source area needs at least one limb. No size argument may be zero. Unless otherwise stated, in-place operations are allowed where source and destination are the same, but not where they only partly overlap.
The mpn functions are the base for the implementation of the mpz_, mpf_, and mpq_ functions.
This example adds the number beginning at s1p and the number beginning at
s2p and writes the sum at destp. All areas have n limbs.
cy = mpn_add_n (destp, s1p, s2p, n)
In the notation used here, a source operand is identified by the pointer to the least significant limb, and the limb count in braces. For example, {s1p, s1n}.
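Spelled out as a complete, compilable sketch (the limb values and the function name add_example are arbitrary illustrations):

  #include <gmp.h>

  void
  add_example (void)
  {
    mp_limb_t s1p[4] = { 1, 2, 3, 4 };     /* least significant limb first */
    mp_limb_t s2p[4] = { 5, 6, 7, 8 };
    mp_limb_t destp[4];                    /* room for n = 4 result limbs */
    mp_limb_t cy;

    cy = mpn_add_n (destp, s1p, s2p, 4);   /* {destp, 4} = {s1p, 4} + {s2p, 4} */
    /* cy is 1 if the sum did not fit in 4 limbs, otherwise 0 */
  }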
mp_limb_t mpn_add_n (mp_limb_t *rp, const mp_limb_t *s1p, const mp_limb_t *s2p, mp_size_t n) | Function |
Add {s1p, n} and {s2p, n}, and write the n
least significant limbs of the result to rp. Return carry, either 0 or
1.
This is the lowest-level function for addition. It is the preferred function
for addition, since it is written in assembly for most CPUs. For addition of
a variable to itself (i.e., s1p equals s2p), use mpn_lshift with a count of 1 for optimal speed. |
mp_limb_t mpn_add_1 (mp_limb_t *rp, const mp_limb_t *s1p, mp_size_t n, mp_limb_t s2limb) | Function |
Add {s1p, n} and s2limb, and write the n least significant limbs of the result to rp. Return carry, either 0 or 1. |
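For instance, a common pattern is to let the result grow by one limb when there is a carry out; a sketch, assuming xp and n are already declared and xp has room for n+1 limbs:

  mp_limb_t cy = mpn_add_1 (xp, xp, n, 1);   /* add 1 to {xp, n} in place */
  if (cy != 0)
    xp[n++] = cy;                            /* the result grew by one limb */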
mp_limb_t mpn_add (mp_limb_t *rp, const mp_limb_t *s1p, mp_size_t s1n, const mp_limb_t *s2p, mp_size_t s2n) | Function |
Add {s1p, s1n} and {s2p, s2n}, and write the
s1n least significant limbs of the result to rp. Return carry,
either 0 or 1.
This function requires that s1n is greater than or equal to s2n. |
mp_limb_t mpn_sub_n (mp_limb_t *rp, const mp_limb_t *s1p, const mp_limb_t *s2p, mp_size_t n) | Function |
Subtract {s2p, n} from {s1p, n}, and write the
n least significant limbs of the result to rp. Return borrow,
either 0 or 1.
This is the lowest-level function for subtraction. It is the preferred function for subtraction, since it is written in assembly for most CPUs. |
mp_limb_t mpn_sub_1 (mp_limb_t *rp, const mp_limb_t *s1p, mp_size_t n, mp_limb_t s2limb) | Function |
Subtract s2limb from {s1p, n}, and write the n least significant limbs of the result to rp. Return borrow, either 0 or 1. |
mp_limb_t mpn_sub (mp_limb_t *rp, const mp_limb_t *s1p, mp_size_t s1n, const mp_limb_t *s2p, mp_size_t s2n) | Function |
Subtract {s2p, s2n} from {s1p, s1n}, and write the
s1n least significant limbs of the result to rp. Return borrow,
either 0 or 1.
This function requires that s1n is greater than or equal to s2n. |
void mpn_mul_n (mp_limb_t *rp, const mp_limb_t *s1p, const mp_limb_t *s2p, mp_size_t n) | Function |
Multiply {s1p, n} and {s2p, n}, and write the
2*n-limb result to rp.
The destination has to have space for 2*n limbs, even if the product's most significant limb is zero. |
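A minimal sketch showing the 2*n-limb destination requirement (the values are illustrative, and the code is assumed to sit inside a function with gmp.h included):

  mp_limb_t a[3] = { 1, 2, 3 };
  mp_limb_t b[3] = { 4, 5, 6 };
  mp_limb_t prod[6];              /* 2*n limbs, even if the top limb ends up zero */

  mpn_mul_n (prod, a, b, 3);      /* {prod, 6} = {a, 3} * {b, 3} */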
mp_limb_t mpn_mul_1 (mp_limb_t *rp, const mp_limb_t *s1p, mp_size_t n, mp_limb_t s2limb) | Function |
Multiply {s1p, n} by s2limb, and write the n least
significant limbs of the product to rp. Return the most significant
limb of the product. {s1p, n} and {rp, n} are
allowed to overlap provided rp <= s1p.
This is a low-level function that is a building block for general multiplication as well as other operations in GMP. It is written in assembly for most CPUs. Don't call this function if s2limb is a power of 2; use mpn_lshift with a count equal to the logarithm of s2limb instead, for optimal speed. |
mp_limb_t mpn_addmul_1 (mp_limb_t *rp, const mp_limb_t *s1p, mp_size_t n, mp_limb_t s2limb) | Function |
Multiply {s1p, n} and s2limb, and add the n least
significant limbs of the product to {rp, n} and write the result
to rp. Return the most significant limb of the product, plus carry-out
from the addition.
This is a low-level function that is a building block for general multiplication as well as other operations in GMP. It is written in assembly for most CPUs. |
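To illustrate how these building blocks combine, here is a sketch of a plain schoolbook multiplication written in terms of mpn_mul_1 and mpn_addmul_1. It is only an illustration of the idea, not GMP's internal code, and basecase_mul is a hypothetical name.

  #include <gmp.h>

  /* {rp, un+vn} = {up, un} * {vp, vn}; requires un >= 1, vn >= 1 and that
     rp does not overlap the inputs. */
  static void
  basecase_mul (mp_limb_t *rp, const mp_limb_t *up, mp_size_t un,
                const mp_limb_t *vp, mp_size_t vn)
  {
    mp_size_t i;
    rp[un] = mpn_mul_1 (rp, up, un, vp[0]);              /* first partial product */
    for (i = 1; i < vn; i++)
      rp[un + i] = mpn_addmul_1 (rp + i, up, un, vp[i]); /* accumulate the rest */
  }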
mp_limb_t mpn_submul_1 (mp_limb_t *rp, const mp_limb_t *s1p, mp_size_t n, mp_limb_t s2limb) | Function |
Multiply {s1p, n} and s2limb, and subtract the n
least significant limbs of the product from {rp, n} and write the
result to rp. Return the most significant limb of the product, minus
borrow-out from the subtraction.
This is a low-level function that is a building block for general multiplication and division as well as other operations in GMP. It is written in assembly for most CPUs. |
mp_limb_t mpn_mul (mp_limb_t *rp, const mp_limb_t *s1p, mp_size_t s1n, const mp_limb_t *s2p, mp_size_t s2n) | Function |
Multiply {s1p, s1n} and {s2p, s2n}, and write the
result to rp. Return the most significant limb of the result.
The destination has to have space for s1n + s2n limbs, even if the result might be one limb smaller. This function requires that s1n is greater than or equal to s2n. The destination must be distinct from both input operands. |
void mpn_tdiv_qr (mp_limb_t *qp, mp_limb_t *rp, mp_size_t qxn, const mp_limb_t *np, mp_size_t nn, const mp_limb_t *dp, mp_size_t dn) | Function |
Divide {np, nn} by {dp, dn} and put the quotient
at {qp, nn-dn+1} and the remainder at {rp,
dn}. The quotient is rounded towards 0.
No overlap is permitted between arguments. nn must be greater than or equal to dn. The most significant limb of dp must be non-zero. The qxn operand must be zero. |
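A usage sketch with illustrative sizes; the high limb of the divisor must be non-zero and qxn must be zero:

  mp_limb_t np[5] = { 7, 0, 0, 0, 9 };    /* dividend {np, 5}, least significant limb first */
  mp_limb_t dp[2] = { 3, 5 };             /* divisor {dp, 2}, high limb non-zero */
  mp_limb_t qp[5 - 2 + 1];                /* quotient needs nn-dn+1 limbs */
  mp_limb_t rp[2];                        /* remainder needs dn limbs */

  mpn_tdiv_qr (qp, rp, 0, np, 5, dp, 2);  /* {np, 5} = {qp, 4} * {dp, 2} + {rp, 2} */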
mp_limb_t mpn_divrem (mp_limb_t *r1p, mp_size_t qxn, mp_limb_t *rs2p, mp_size_t rs2n, const mp_limb_t *s3p, mp_size_t s3n) | Function |
[This function is obsolete. Please call mpn_tdiv_qr instead for best
performance.]
Divide {rs2p, rs2n} by {s3p, s3n}, and write the quotient at r1p, with the exception of the most significant limb, which is returned. The remainder replaces the dividend at rs2p; it will be s3n limbs long (i.e., as many limbs as the divisor). In addition to an integer quotient, qxn fraction limbs are developed, and stored after the integral limbs. For most usages, qxn will be zero. It is required that rs2n is greater than or equal to s3n. It is required that the most significant bit of the divisor is set. If the quotient is not needed, pass rs2p + s3n as r1p. Aside from that special case, no overlap between arguments is permitted. Return the most significant limb of the quotient, either 0 or 1. The area at r1p needs to be rs2n - s3n + qxn limbs large. |
mp_limb_t mpn_divrem_1 (mp_limb_t *r1p, mp_size_t qxn, mp_limb_t *s2p, mp_size_t s2n, mp_limb_t s3limb) | Function |
mp_limb_t mpn_divmod_1 (mp_limb_t *r1p, mp_limb_t *s2p, mp_size_t s2n, mp_limb_t s3limb) | Macro |
Divide {s2p, s2n} by s3limb, and write the quotient at
r1p. Return the remainder.
The integer quotient is written to {r1p+qxn, s2n} and in addition qxn fraction limbs are developed and written to {r1p, qxn}. Either or both s2n and qxn can be zero. For most usages, qxn will be zero.
The areas at r1p and s2p have to be identical or completely separate, not partially overlapping. |
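For example, dividing by a small constant such as 10 (the names ap and qp are illustrative):

  mp_limb_t ap[4] = { 123, 0, 0, 1 };
  mp_limb_t qp[4];                                     /* quotient gets s2n limbs when qxn is 0 */
  mp_limb_t rem = mpn_divrem_1 (qp, 0, ap, 4, 10);     /* rem is {ap, 4} mod 10 */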
mp_limb_t mpn_divmod (mp_limb_t *r1p, mp_limb_t *rs2p, mp_size_t rs2n, const mp_limb_t *s3p, mp_size_t s3n) | Function |
[This function is obsolete. Please call mpn_tdiv_qr instead for best
performance.]
|
mp_limb_t mpn_divexact_by3 (mp_limb_t *rp, mp_limb_t *sp, mp_size_t n) | Macro |
mp_limb_t mpn_divexact_by3c (mp_limb_t *rp, mp_limb_t *sp, mp_size_t n, mp_limb_t carry) | Function |
Divide {sp, n} by 3, expecting it to divide exactly, and writing
the result to {rp, n}. If 3 divides exactly, the return value is
zero and the result is the quotient. If not, the return value is non-zero and
the result won't be anything useful.
These routines use a multiply-by-inverse and will be faster than mpn_divrem_1 on CPUs with fast multiplication but slow division.
The source a, result q, size n, initial carry i, and return value c satisfy c*b^n + a-i = 3*q, where b = 2^mp_bits_per_limb. The return c is always 0, 1 or 2, and the initial carry i must also be 0, 1 or 2 (these are both borrows really). When c=0 clearly q=(a-i)/3. When c!=0, the remainder (a-i) mod 3 is given by 3-c, because b == 1 mod 3 (when mp_bits_per_limb is even, which is always so currently).
|
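A usage sketch, assuming rp, sp and n are already declared as above:

  /* Attempt the exact division {sp, n} / 3; the quotient in {rp, n} is only
     meaningful if the return value is zero. */
  if (mpn_divexact_by3 (rp, sp, n) == 0)
    {
      /* {rp, n} now holds {sp, n} divided by 3 */
    }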
mp_limb_t mpn_mod_1 (mp_limb_t *s1p, mp_size_t s1n, mp_limb_t s2limb) | Function |
Divide {s1p, s1n} by s2limb, and return the remainder. s1n can be zero. |
mp_limb_t mpn_bdivmod (mp_limb_t *rp, mp_limb_t *s1p, mp_size_t s1n, const mp_limb_t *s2p, mp_size_t s2n, unsigned long int d) | Function |
This function puts the low
floor(d/mp_bits_per_limb ) limbs of q =
{s1p, s1n}/{s2p, s2n} mod 2^d at
rp, and returns the high d mod mp_bits_per_limb bits of
q.
{s1p, s1n} - q * {s2p, s2n} mod 2^(s1n*mp_bits_per_limb) is placed at s1p. Since the low floor(d/mp_bits_per_limb) limbs of this difference are zero, it is possible to overwrite the low limbs at s1p with this difference, provided rp <= s1p.
This function requires that s1n*mp_bits_per_limb >= d, and that {s2p, s2n} is odd.
This interface is preliminary. It might change incompatibly in future revisions. |
mp_limb_t mpn_lshift (mp_limb_t *rp, const mp_limb_t *sp, mp_size_t n, unsigned int count) | Function |
Shift {sp, n} left by count bits, and write the result to
{rp, n}. The bits shifted out at the left are returned in the
least significant count bits of the return value (the rest of the return
value is zero).
count must be in the range 1 to mp_bits_per_limb-1. This function is written in assembly for most CPUs. |
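For example, multiplying a number by 2^3 in place and letting it grow by a limb when needed (xp and n are assumed declared, with room for n+1 limbs at xp):

  mp_limb_t out = mpn_lshift (xp, xp, n, 3);   /* shift {xp, n} left by 3 bits in place */
  if (out != 0)
    xp[n++] = out;                             /* keep the bits shifted out the top */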
mp_limb_t mpn_rshift (mp_limb_t *rp, const mp_limb_t *sp, mp_size_t n, unsigned int count) | Function |
Shift {sp, n} right by count bits, and write the result to
{rp, n}. The bits shifted out at the right are returned in the
most significant count bits of the return value (the rest of the return
value is zero).
count must be in the range 1 to mp_bits_per_limb-1. This function is written in assembly for most CPUs. |
int mpn_cmp (const mp_limb_t *s1p, const mp_limb_t *s2p, mp_size_t n) | Function |
Compare {s1p, n} and {s2p, n} and return a positive value if s1 > s2, 0 if they are equal, or a negative value if s1 < s2. |
mp_size_t mpn_gcd (mp_limb_t *rp, mp_limb_t *s1p, mp_size_t s1n, mp_limb_t *s2p, mp_size_t s2n) | Function |
Set {rp, retval} to the greatest common divisor of {s1p,
s1n} and {s2p, s2n}. The result can be up to s2n
limbs, the return value is the actual number produced. Both source operands
are destroyed.
{s1p, s1n} must have at least as many bits as {s2p, s2n}. {s2p, s2n} must be odd. Both operands must have non-zero most significant limbs. No overlap is permitted between {s1p, s1n} and {s2p, s2n}. |
mp_limb_t mpn_gcd_1 (const mp_limb_t *s1p, mp_size_t s1n, mp_limb_t s2limb) | Function |
Return the greatest common divisor of {s1p, s1n} and s2limb. Both operands must be non-zero. |
mp_size_t mpn_gcdext (mp_limb_t *r1p, mp_limb_t *r2p, mp_size_t *r2n, mp_limb_t *s1p, mp_size_t s1n, mp_limb_t *s2p, mp_size_t s2n) | Function |
Calculate the greatest common divisor of {s1p, s1n} and
{s2p, s2n}. Store the gcd at {r1p, retval} and
the first cofactor at {r2p, *r2n}, with *r2n negative if
the cofactor is negative. r1p and r2p should each have room for
s1n+1 limbs, but the return value and value stored through
r2n indicate the actual number produced.
{s1p, s1n} >= {s2p, s2n} is required, and both must be non-zero. The regions {s1p, s1n+1} and {s2p, s2n+1} are destroyed (i.e., the operands plus an extra limb past the end of each). The cofactor r2 will satisfy r2*s1 + k*s2 = r1. The second cofactor k is not calculated but can easily be obtained from (r1 - r2*s1) / s2. |
mp_size_t mpn_sqrtrem (mp_limb_t *r1p, mp_limb_t *r2p, const mp_limb_t *sp, mp_size_t n) | Function |
Compute the square root of {sp, n} and put the result at
{r1p, ceil(n/2)} and the remainder at {r2p,
retval}. r2p needs space for n limbs, but the return value
indicates how many are produced.
The most significant limb of {sp, n} must be non-zero. The areas {r1p, ceil(n/2)} and {sp, n} must be completely separate. The areas {r2p, n} and {sp, n} must be either identical or completely separate. If the remainder is not wanted then r2p can be NULL, and in this case the return value is zero or non-zero according to whether the remainder would have been zero or non-zero. A return value of zero indicates a perfect square. See also mpn_perfect_square_p.
|
mp_size_t mpn_get_str (unsigned char *str, int base, mp_limb_t *s1p, mp_size_t s1n) | Function |
Convert {s1p, s1n} to a raw unsigned char array at str in
base base, and return the number of characters produced. There may be
leading zeros in the string. The string is not in ASCII; to convert it to
printable format, add the ASCII codes for '0' or 'A', depending on
the base and range.
The most significant limb of the input {s1p, s1n} must be non-zero. The input {s1p, s1n} is clobbered, except when base is a power of 2, in which case it's unchanged. The area at str has to have space for the largest possible number represented by a s1n long limb array, plus one extra character. |
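As a sketch, converting to decimal and then making the digits printable; BUFSIZE is a hypothetical constant assumed large enough per the space rule above, and {s1p, s1n} is clobbered since 10 is not a power of 2:

  unsigned char str[BUFSIZE];
  mp_size_t len, i;

  len = mpn_get_str (str, 10, s1p, s1n);   /* raw digit values 0..9, possibly with leading zeros */
  for (i = 0; i < len; i++)
    str[i] += '0';                         /* map digit values to ASCII characters */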
mp_size_t mpn_set_str (mp_limb_t *rp, const char *str, size_t strsize, int base) | Function |
Convert bytes {str,strsize} in the given base to limbs at
rp.
str[0] is the most significant byte and str[strsize-1] is the least significant. Each byte should be a value in the range 0 to base-1, not an ASCII character. base can vary from 2 to 256. The return value is the number of limbs written to rp. If the most significant input byte is non-zero then the high limb at rp will be non-zero, and only that exact number of limbs will be required there. If the most significant input byte is zero then there may be high zero limbs written to rp and included in the return value. strsize must be at least 1, and no overlap is permitted between {str,strsize} and the result at rp. |
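A small sketch going the other way, building a limb array from raw base-10 digit values (the digit values are illustrative):

  char digits[3] = { 2, 5, 8 };    /* the number 258, most significant digit first */
  mp_limb_t rp[1];                 /* one limb is ample for this small value */
  mp_size_t rn = mpn_set_str (rp, digits, 3, 10);   /* rn will be 1 here */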
unsigned long int mpn_scan0 (const mp_limb_t *s1p, unsigned long int bit) | Function |
Scan s1p from bit position bit for the next clear bit.
It is required that there be a clear bit within the area at s1p at or beyond bit position bit, so that the function has something to return. |
unsigned long int mpn_scan1 (const mp_limb_t *s1p, unsigned long int bit) | Function |
Scan s1p from bit position bit for the next set bit.
It is required that there be a set bit within the area at s1p at or beyond bit position bit, so that the function has something to return. |
void mpn_random (mp_limb_t *r1p, mp_size_t r1n) | Function |
void mpn_random2 (mp_limb_t *r1p, mp_size_t r1n) | Function |
Generate a random number of length r1n and store it at r1p. The
most significant limb is always non-zero. mpn_random generates
uniformly distributed limb data, while mpn_random2 generates long strings of
zeros and ones in the binary representation.
|
unsigned long int mpn_popcount (const mp_limb_t *s1p, mp_size_t n) | Function |
Count the number of set bits in {s1p, n}. |
unsigned long int mpn_hamdist (const mp_limb_t *s1p, const mp_limb_t *s2p, mp_size_t n) | Function |
Compute the hamming distance between {s1p, n} and {s2p, n}. |
int mpn_perfect_square_p (const mp_limb_t *s1p, mp_size_t n) | Function |
Return non-zero iff {s1p, n} is a perfect square. |
Everything in this section is highly experimental and may disappear or be subject to incompatible changes in a future version of GMP.
Nails are an experimental feature whereby a few bits are left unused at the top of each mp_limb_t. This can significantly improve carry handling on some processors.
All the mpn
functions accepting limb data will expect the nail bits to
be zero on entry, and will return data with the nails similarly all zero.
This applies both to limb vectors and to single limb arguments.
Nails can be enabled by configuring with --enable-nails. By default the number of bits will be chosen according to what suits the host processor, but a particular number can be selected with --enable-nails=N.
At the mpn level, a nail build is neither source nor binary compatible with a non-nail build, strictly speaking. But programs acting on limbs only through the mpn functions are likely to work equally well with either build, and judicious use of the definitions below should make any program compatible with either build, at the source level.
For the higher level routines, meaning mpz
etc, a nail build should be
fully source and binary compatible with a non-nail build.
GMP_NAIL_BITS | Macro |
GMP_NUMB_BITS | Macro |
GMP_LIMB_BITS | Macro |
GMP_NAIL_BITS is the number of nail bits, or 0 when nails are not in
use. GMP_NUMB_BITS is the number of data bits in a limb.
GMP_LIMB_BITS is the total number of bits in an mp_limb_t. In all cases

GMP_LIMB_BITS == GMP_NAIL_BITS + GMP_NUMB_BITS |
GMP_NAIL_MASK | Macro |
GMP_NUMB_MASK | Macro |
Bit masks for the nail and number parts of a limb. GMP_NAIL_MASK is 0
when nails are not in use.
|
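A minimal sketch of splitting a limb into its parts with these masks (limb is an assumed mp_limb_t value):

  mp_limb_t numb = limb & GMP_NUMB_MASK;   /* the data bits */
  mp_limb_t nail = limb & GMP_NAIL_MASK;   /* must be zero in data passed to mpn functions */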
GMP_NUMB_MAX | Macro |
The maximum value that can be stored in the number part of a limb. This is
the same as GMP_NUMB_MASK, but can be used for clarity when doing
comparisons rather than bit-wise operations.
|
The term "nails" comes from finger or toe nails, which are at the ends of a limb (arm or leg). "numb" is short for number, but is also how the developers felt after trying for a long time to come up with sensible names for these things.
In the future (the distant future most likely) a non-zero nail might be permitted, giving non-unique representations for numbers in a limb vector. This would help vector processors since carries would only ever need to propagate one or two limbs.