bitReverse
r = @bitReverse(a) with 2s-complement semantics. r and a may be aliases.
Asserts the result fits in r. Upper bound on the number of limbs needed by
r is calcTwosCompLimbCount(bit_count).
Parameters
- r: *Mutable
- a: Const
- signedness: Signedness
- bit_count: usize
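A minimal usage sketch, assuming the module's `Mutable.init` and `Const.orderAgainstScalar` helpers; the buffer size follows the bound stated above, and the values are illustrative:

const std = @import("std");
const big = std.math.big;

test "bitReverse: reverse the low 8 bits of 1" {
    // Upper bound on the number of limbs needed, per the doc comment above.
    var limbs: [big.int.calcTwosCompLimbCount(8)]big.Limb = undefined;
    var r = big.int.Mutable.init(&limbs, 1);
    // r and a may alias, so reversing in place is fine:
    // 0b0000_0001 -> 0b1000_0000 == 128 (unsigned).
    r.bitReverse(r.toConst(), .unsigned, 8);
    try std.testing.expect(r.toConst().orderAgainstScalar(128) == .eq);
}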
Types
- TwosCompIntLimit
- Used to indicate either limit of a 2s-complement integer.
- Mutable
- An arbitrary-precision big integer, with a fixed set of mutable limbs.
- Const
- An arbitrary-precision big integer, with a fixed set of immutable limbs.
- Managed
- An arbitrary-precision big integer along with an allocator which manages the memory.
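A brief sketch of how the types relate, assuming the standard `Managed.initSet` and `toConst` conversions (the value is illustrative):

const std = @import("std");
const big = std.math.big;

test "Managed owns its limbs; Const is a view" {
    var m = try big.int.Managed.initSet(std.testing.allocator, 42);
    defer m.deinit();
    // toConst exposes the same limbs without copying or allocating.
    const c: big.int.Const = m.toConst();
    try std.testing.expect(c.orderAgainstScalar(42) == .eq);
}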
Functions
- calcLimbLen
- Returns the number of limbs needed to store `scalar`, which must be a primitive integer value.
- calcSetStringLimbCount
- Assumes `string_len` doesn't account for minus signs if the number is negative.
- calcNonZeroTwosCompLimbCount
- Compute the number of limbs required to store a 2s-complement number of `bit_count` bits.
- calcTwosCompLimbCount
- Compute the number of limbs required to store a 2s-complement number of `bit_count` bits (see the sketch after this list).
- addMulLimbWithCarry
- a + b * c + *carry, sets carry to the overflow bits
- llcmp
- Returns -1, 0, 1 if |a| < |b|, |a| == |b| or |a| > |b| respectively for limbs.
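An illustration of `calcTwosCompLimbCount`, assuming only that a limb holds `@bitSizeOf(Limb)` bits and that the count rounds bits up to whole limbs, consistent with the description above:

const std = @import("std");
const big = std.math.big;

test "calcTwosCompLimbCount rounds bits up to whole limbs" {
    const limb_bits = @bitSizeOf(big.Limb);
    // Exactly one limb's worth of bits needs one limb...
    try std.testing.expectEqual(@as(usize, 1), big.int.calcTwosCompLimbCount(limb_bits));
    // ...and one more bit spills into a second limb.
    try std.testing.expectEqual(@as(usize, 2), big.int.calcTwosCompLimbCount(limb_bits + 1));
}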
Source
pub fn bitReverse(r: *Mutable, a: Const, signedness: Signedness, bit_count: usize) void {
    if (bit_count == 0) return;

    r.copy(a);

    const limbs_required = calcTwosCompLimbCount(bit_count);

    if (!a.positive) {
        r.positive = true; // Negate.
        r.bitNotWrap(r.toConst(), .unsigned, bit_count); // Bitwise NOT.
        r.addScalar(r.toConst(), 1); // Add one.
    } else if (limbs_required > a.limbs.len) {
        // Zero-extend to our output length
        for (r.limbs[a.limbs.len..limbs_required]) |*limb| {
            limb.* = 0;
        }
        r.len = limbs_required;
    }

    // 0b0..01..1000 with @log2(@sizeOf(Limb)) consecutive ones
    const endian_mask: usize = (@sizeOf(Limb) - 1) << 3;

    const bytes = std.mem.sliceAsBytes(r.limbs);

    var k: usize = 0;
    while (k < ((bit_count + 1) / 2)) : (k += 1) {
        var i = k;
        var rev_i = bit_count - i - 1;

        // This "endian mask" remaps a low (LE) byte to the corresponding high
        // (BE) byte in the Limb, without changing which limbs we are indexing
        if (native_endian == .big) {
            i ^= endian_mask;
            rev_i ^= endian_mask;
        }

        const bit_i = std.mem.readPackedInt(u1, bytes, i, .little);
        const bit_rev_i = std.mem.readPackedInt(u1, bytes, rev_i, .little);
        std.mem.writePackedInt(u1, bytes, i, bit_rev_i, .little);
        std.mem.writePackedInt(u1, bytes, rev_i, bit_i, .little);
    }

    // Calculate signed-magnitude representation for output
    if (signedness == .signed) {
        const last_bit = switch (native_endian) {
            .little => std.mem.readPackedInt(u1, bytes, bit_count - 1, .little),
            .big => std.mem.readPackedInt(u1, bytes, (bit_count - 1) ^ endian_mask, .little),
        };

        if (last_bit == 1) {
            r.bitNotWrap(r.toConst(), .unsigned, bit_count); // Bitwise NOT.
            r.addScalar(r.toConst(), 1); // Add one.
            r.positive = false; // Negate.
        }
    }

    r.normalize(r.len);
}
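The loop swaps bit `k` with bit `bit_count - k - 1` through byte-level packed reads, so the same code works on little- and big-endian targets; the `endian_mask` XOR only re-maps byte addresses within a limb on big-endian machines. When the reversed pattern ends up with its sign bit set, the final `signed` branch converts the two's-complement bit pattern back to the signed-magnitude form the library stores. A hedged sketch of that path, reusing the assumed helpers from the usage sketch above (the expected value follows from reversing `0b0000_0001` into `0b1000_0000`, which is -128 as a signed 8-bit value):

const std = @import("std");
const big = std.math.big;

test "bitReverse: signed interpretation flips the sign" {
    var limbs: [big.int.calcTwosCompLimbCount(8)]big.Limb = undefined;
    var r = big.int.Mutable.init(&limbs, 1);
    // 1 = 0b0000_0001 reversed over 8 bits is 0b1000_0000, i.e. -128
    // when read back as a signed 8-bit two's-complement value.
    r.bitReverse(r.toConst(), .signed, 8);
    try std.testing.expect(r.toConst().orderAgainstScalar(-128) == .eq);
}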