DoxigAlpha

byteSwap

r = @byteSwap(a) with 2s-complement semantics. r and a may be aliases.

Asserts the result fits in r. An upper bound on the number of limbs needed by r is calcTwosCompLimbCount(8*byte_count).

Parameters

r: *Mutable
a: Const
signedness: Signedness
byte_count: usize


Types

TwosCompIntLimit
Used to indicate either limit of a 2s-complement integer.
Mutable
An arbitrary-precision big integer, with a fixed set of mutable limbs.
Const
An arbitrary-precision big integer, with a fixed set of immutable limbs.
Managed
An arbitrary-precision big integer along with an allocator which manages the memory.


Functions

calcLimbLen
Returns the number of limbs needed to store `scalar`, which must be a primitive integer value.
calcSetStringLimbCount
Assumes `string_len` doesn't account for minus signs if the number is negative.
calcNonZeroTwosCompLimbCount
Compute the number of limbs required to store a 2s-complement number of `bit_count` bits.
calcTwosCompLimbCount
Compute the number of limbs required to store a 2s-complement number of `bit_count` bits.
addMulLimbWithCarry
a + b * c + *carry, sets carry to the overflow bits
llcmp
Returns -1, 0, 1 if |a| < |b|, |a| == |b| or |a| > |b| respectively for limbs.

Source

Implementation

pub fn byteSwap(r: *Mutable, a: Const, signedness: Signedness, byte_count: usize) void {
    if (byte_count == 0) return;

    r.copy(a);
    const limbs_required = calcTwosCompLimbCount(8 * byte_count);

    if (!a.positive) {
        r.positive = true; // Negate.
        r.bitNotWrap(r.toConst(), .unsigned, 8 * byte_count); // Bitwise NOT.
        r.addScalar(r.toConst(), 1); // Add one.
    } else if (limbs_required > a.limbs.len) {
        // Zero-extend to our output length
        for (r.limbs[a.limbs.len..limbs_required]) |*limb| {
            limb.* = 0;
        }
        r.len = limbs_required;
    }

    // 0b0..01..1 with @log2(@sizeOf(Limb)) trailing ones
    const endian_mask: usize = @sizeOf(Limb) - 1;

    var bytes = std.mem.sliceAsBytes(r.limbs);
    assert(bytes.len >= byte_count);

    var k: usize = 0;
    while (k < (byte_count + 1) / 2) : (k += 1) {
        var i = k;
        var rev_i = byte_count - k - 1;

        // This "endian mask" remaps a low (LE) byte to the corresponding high
        // (BE) byte in the Limb, without changing which limbs we are indexing
        if (native_endian == .big) {
            i ^= endian_mask;
            rev_i ^= endian_mask;
        }

        const byte_i = bytes[i];
        const byte_rev_i = bytes[rev_i];
        bytes[rev_i] = byte_i;
        bytes[i] = byte_rev_i;
    }

    // Calculate signed-magnitude representation for output
    if (signedness == .signed) {
        const last_byte = switch (native_endian) {
            .little => bytes[byte_count - 1],
            .big => bytes[(byte_count - 1) ^ endian_mask],
        };

        if (last_byte & (1 << 7) != 0) { // Check sign bit of last byte
            r.bitNotWrap(r.toConst(), .unsigned, 8 * byte_count); // Bitwise NOT.
            r.addScalar(r.toConst(), 1); // Add one.
            r.positive = false; // Negate.
        }
    }
    r.normalize(r.len);
}