Efficient test for Pythagorean triples in modulo integer space - C

I was wondering what the most effective formula for testing whether three numbers form a Pythagorean triple is.
Just as a reminder: a Pythagorean triple is three integers satisfying a²+b²=c².
I don't mean the most effective formula in terms of time, but the formula that is most efficient in terms of not causing an overflow in a specific integer type (let's say 32-bit unsigned int).
I was trying a bit with rearrangements of a*a + b*b == c*c.
Let's assume a<=b<c; then the best formula I could get to is:
2b*(c-b) == (a+b-c) * (a-b+c)
With this formula it can be proven that the right side is smaller than a*c, and so is the left side, but a*c doesn't look like a huge improvement over c*c.
So my question is whether there is a better formula for this conditional that works with bigger numbers without overflowing the integer space. The execution time of the formula doesn't matter much, besides that it should be O(1).
PS: I don't know if I should post such a question here or on Mathematics SE, but to me it seems to be more about programming.
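A minimal sketch of that rearranged test, as I would code it (my illustration, not part of the original question); it assumes 0 < a <= b < c and a + b >= c, so the unsigned subtractions cannot wrap, and it is only exact while both products stay below 2^32 (roughly, while a*c fits):
#include <stdbool.h>

// Rearranged comparison: both sides are bounded by a*c instead of c*c.
bool isTripleRearranged(unsigned int a, unsigned int b, unsigned int c) {
    return 2u * b * (c - b) == (a + b - c) * (a - b + c);
}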

EDIT: If you need to stay within 32-bit integers all the way down, then you can just modify the math to fit your requirement. To keep it simple, I do the math (squaring and summing) on 16-bit chunks of data and use a struct that contains two unsigned ints as the result.
http://ideone.com/er2TaS
#include <iostream>
using namespace std;

struct u64 {
    unsigned int lo;
    unsigned int hi;
    bool of;
};

u64 square(unsigned int a) {
    u64 result;
    unsigned int alo = (a & 0xffff);
    unsigned int ahi = (a >> 16);
    unsigned int aalo = alo * alo;
    unsigned int aami = alo * ahi;
    unsigned int aahi = ahi * ahi;
    unsigned int aa1 = aalo & 0xffff;
    unsigned int aa2 = (aalo >> 16) + (aami & 0xffff) + (aami & 0xffff);
    unsigned int aa3 = (aa2 >> 16) + (aami >> 16) + (aami >> 16) + (aahi & 0xffff);
    unsigned int aa4 = (aa3 >> 16) + (aahi >> 16);
    result.lo = (aa1 & 0xffff) | ((aa2 & 0xffff) << 16);
    result.hi = (aa3 & 0xffff) | (aa4 << 16);
    result.of = false; // 0xffffffff^2 can't overflow
    return result;
}
u64 sum(u64 a, u64 b) {
    u64 result;
    unsigned int a1 = a.lo & 0xffff;
    unsigned int a2 = a.lo >> 16;
    unsigned int a3 = a.hi & 0xffff;
    unsigned int a4 = a.hi >> 16;
    unsigned int b1 = b.lo & 0xffff;
    unsigned int b2 = b.lo >> 16;
    unsigned int b3 = b.hi & 0xffff;
    unsigned int b4 = b.hi >> 16;
    unsigned int s1 = a1 + b1;
    unsigned int s2 = a2 + b2 + (s1 >> 16);
    unsigned int s3 = a3 + b3 + (s2 >> 16);
    unsigned int s4 = a4 + b4 + (s3 >> 16);
    result.lo = (s1 & 0xffff) | ((s2 & 0xffff) << 16);
    result.hi = (s3 & 0xffff) | ((s4 & 0xffff) << 16);
    result.of = (s4 > 0xffff ? true : false);
    return result;
}
bool isTriple(unsigned int a, unsigned int b, unsigned int c) {
    u64 aa = square(a);
    u64 bb = square(b);
    u64 cc = square(c);
    u64 aabb = sum(aa, bb);
    return aabb.lo == cc.lo && aabb.hi == cc.hi && aabb.of == false;
}

int main() {
    cout << isTriple(3,4,5) << endl;
    cout << isTriple(2800,9600,10000) << endl;
    return 0;
}
Converting your 32-bit integers to 64-bit longs, or even floating-point doubles, would reduce the chance of overflow and continue being, programmatically, O(1), since all the major architectures (x86, ARM, etc.) have int-to-double conversion opcodes at the low level, and casting up to a long from an int is also an O(1) operation. (Note that a double's 53-bit mantissa cannot represent every 64-bit square exactly, so the 64-bit integer route is the exact one.)
bool isTriple(int a, int b, int c) {
    long long bigA = a;
    long long bigB = b;
    long long bigC = c;
    return bigA * bigA + bigB * bigB == bigC * bigC;
}

I think a little rearrangement would help a lot.
a²+b²=c²
can be written as b²=c²-a²,
which is b² = (c-a)(c+a),
and hence we arrive at
b/(c+a) = (c-a)/b
or (c+a)/b = b/(c-a)
Using the above equation, you do not need to compute any squares. So we can do this:
if (((c + a) / (double)b) == ((double)b / (c - a)))
    printf("Yes, it is a Pythagorean triple");
else
    printf("No, it is not");
(Beware that the double divisions are rounded, so this comparison can misclassify inputs whose quotients differ only beyond the 53-bit mantissa; it also divides by zero when b == 0 or a == c.)
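If you want that fraction test to be exact in pure integer arithmetic, one option (my sketch, not from the answer above) is the classic continued-fraction comparison: compare (c+a)/b with b/(c-a) using only integer division and remainder, so no product is ever formed. It assumes 0 < a < c and b > 0, and that c + a does not wrap (e.g. inputs below 2^31):
#include <stdbool.h>

// Exact comparison of p1/q1 and p2/q2 for positive integers: compare the
// integer parts; if equal, compare the fractional parts by inverting both
// remainders (which reverses the order). No multiplications are used.
static int cmpFrac(unsigned p1, unsigned q1, unsigned p2, unsigned q2) {
    int sign = 1;
    for (;;) {
        unsigned d1 = p1 / q1, d2 = p2 / q2;
        if (d1 != d2) return (d1 < d2 ? -1 : 1) * sign;
        unsigned r1 = p1 % q1, r2 = p2 % q2;
        if (r1 == 0 || r2 == 0) {
            if (r1 == r2) return 0;
            return (r1 == 0 ? -1 : 1) * sign;
        }
        // r1/q1 vs r2/q2 compares like q1/r1 vs q2/r2 with the order flipped
        p1 = q1; q1 = r1; p2 = q2; q2 = r2;
        sign = -sign;
    }
}

// a^2 + b^2 == c^2  <=>  (c+a)/b == b/(c-a); assumes 0 < a <= b < c.
bool isTripleExact(unsigned a, unsigned b, unsigned c) {
    return cmpFrac(c + a, b, b, c - a) == 0;
}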

Related

Interleave 4 byte ints to 8 byte int

I'm currently working on a function which accepts two 4-byte unsigned integers and returns an 8-byte unsigned long. I've tried to base my work on the methods depicted in this research, but all my attempts have been unsuccessful. The specific inputs I am working with are 0x12345678 and 0xdeadbeef, and the result I'm looking for is 0x12de34ad56be78ef. This is my work so far:
unsigned long interleave(uint32_t x, uint32_t y){
    uint64_t result = 0;
    int shift = 33;
    for(int i = 64; i > 0; i-=16){
        shift -= 8;
        //printf("%d\n", i);
        //printf("%d\n", shift);
        result |= (x & i) << shift;
        result |= (y & i) << (shift-1);
    }
}
However, this function keeps returning 0xfffffffe, which is incorrect. I am printing and verifying these values using:
printf("0x%x\n", z);
and the input is initialized like so:
uint32_t x = 0x12345678;
uint32_t y = 0xdeadbeef;
Any help on this topic would be greatly appreciated; C has been a very difficult language for me, and bitwise operations even more so.
This can be done based on interleaving bits, but skipping some steps so it only interleaves bytes. Same idea: first spread out the bytes in a couple of steps, then combine them.
Here is the plan (the original answer illustrated it with a freehand drawing, omitted here): spread each input's bytes apart in two masking steps, then merge the two spread values.
In C (the original was untested; note the step-2 mask must select bytes 5 and 1 of the spread value):
// step 1, move the top two bytes up by 16 bits
uint64_t a = (((uint64_t)x & 0xFFFF0000) << 16) | (x & 0xFFFF);
// step 2, move bytes 5 and 1 up by 8 bits
a = ((a & 0x0000FF000000FF00) << 8) | (a & 0x000000FF000000FF);
// same thing with y
uint64_t b = (((uint64_t)y & 0xFFFF0000) << 16) | (y & 0xFFFF);
b = ((b & 0x0000FF000000FF00) << 8) | (b & 0x000000FF000000FF);
// merge them: x's bytes land in the high byte of each pair
uint64_t result = (a << 8) | b;
Using SSSE3 PSHUFB has been suggested; it'll work, but there is an instruction that can do a byte-wise interleave in one go, punpcklbw. So all we really need to do is get the values into and out of vector registers, and that single instruction will then take care of it.
Not tested:
#include <emmintrin.h> // SSE2 intrinsics (punpcklbw)

uint64_t interleave(uint32_t x, uint32_t y) {
    __m128i xvec = _mm_cvtsi32_si128(x);
    __m128i yvec = _mm_cvtsi32_si128(y);
    __m128i interleaved = _mm_unpacklo_epi8(yvec, xvec); // y's bytes in the even lanes
    return _mm_cvtsi128_si64(interleaved); // x86-64 only
}
With bit-shifting and bitwise operations (endianness independent):
uint64_t interleave(uint32_t x, uint32_t y){
    uint64_t result = 0;
    for(uint8_t i = 0; i < 4; i++){
        result |= ((x & (0xFFull << (8*i))) << (8*(i+1)));
        result |= ((y & (0xFFull << (8*i))) << (8*i));
    }
    return result;
}
With pointers (endianness dependent):
uint64_t interleave(uint32_t x, uint32_t y){
    uint64_t result = 0;
    uint8_t * x_ptr = (uint8_t *)&x;
    uint8_t * y_ptr = (uint8_t *)&y;
    uint8_t * r_ptr = (uint8_t *)&result;
    for(uint8_t i = 0; i < 4; i++){
        *(r_ptr++) = y_ptr[i];
        *(r_ptr++) = x_ptr[i];
    }
    return result;
}
Note: this solution assumes little-endian byte order
You could do it like this:
uint64_t interleave(uint32_t x, uint32_t y)
{
    uint64_t z;
    unsigned char *a = (unsigned char *)&x; // 1
    unsigned char *b = (unsigned char *)&y; // 1
    unsigned char *c = (unsigned char *)&z;
    c[0] = a[0];
    c[1] = b[0];
    c[2] = a[1];
    c[3] = b[1];
    c[4] = a[2];
    c[5] = b[2];
    c[6] = a[3];
    c[7] = b[3];
    return z;
}
Interchange a and b on the lines marked 1 depending on ordering requirement.
A version with shifts, where the LSB of y is always the LSB of the output as in your example, is:
uint64_t interleave(uint32_t x, uint32_t y)
{
    return (y & 0xFFull)
         | (x & 0xFFull) << 8
         | (y & 0xFF00ull) << 8
         | (x & 0xFF00ull) << 16
         | (y & 0xFF0000ull) << 16
         | (x & 0xFF0000ull) << 24
         | (y & 0xFF000000ull) << 24
         | (x & 0xFF000000ull) << 32;
}
The compilers I tried don't seem to do a good job of optimizing either version, so if this is a performance-critical situation then maybe the inline assembly suggestion from the comments is the way to go.
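As an aside (my addition, not part of the original answers): on x86-64 CPUs with BMI2, the PDEP instruction can scatter each input's bytes into alternating byte lanes with one instruction per input; a hedged sketch:
#include <stdint.h>
#include <immintrin.h> // _pdep_u64 requires BMI2 (compile with -mbmi2)

// _pdep_u64 deposits the source bits, lowest first, into the set bits of
// the mask: x fills the high byte of each pair, y the low byte.
uint64_t interleave_pdep(uint32_t x, uint32_t y) {
    return _pdep_u64(x, 0xFF00FF00FF00FF00ULL)
         | _pdep_u64(y, 0x00FF00FF00FF00FFULL);
}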
Use union punning. It is easy for the compiler to optimize.
#include <stdio.h>
#include <stdint.h>
#include <string.h>

typedef union
{
    uint64_t u64;
    struct // anonymous struct/union members require C11
    {
        union
        {
            uint32_t a32;
            uint8_t a8[4];
        };
        union
        {
            uint32_t b32;
            uint8_t b8[4];
        };
    };
    uint8_t u8[8];
} data_64;

uint64_t interleave(uint32_t a, uint32_t b)
{
    data_64 in, out;
    in.a32 = a;
    in.b32 = b;
    for(size_t index = 0; index < sizeof(a); index++)
    {
        out.u8[index * 2 + 1] = in.a8[index];
        out.u8[index * 2]     = in.b8[index];
    }
    return out.u64;
}
int main(void)
{
    printf("%llx\n", (unsigned long long)interleave(0x12345678U, 0xdeadbeefU));
}

Zip two or more numbers together bitwise

What is the best way to zip two (or more) numbers' bit representations together in C/C++/Obj-C?
I have three numbers whose binary representations are [abc, ABC, xyz]. I would like to produce a number with binary [aAxbBycCz]. I'm mainly working with numbers that are over 21 bits.
(Ignoring the limit on integers, endian-ness and whatnot).
Thanks, happy holidays guys :)
A solution that should work for any number of bits:
const unsigned int BITS = 21;

unsigned long long zipper(unsigned a0, unsigned a1, unsigned a2)
{
    unsigned long long result = 0; // 3 x 21 = 63 bits: needs a 64-bit result
    for (unsigned int mask = 1u << (BITS - 1); mask != 0; mask >>= 1)
    {
        result |= a0 & mask;
        result <<= 1;
        result |= a1 & mask;
        result <<= 1;
        result |= a2 & mask;
    }
    return result;
}
If you need more speed, do some precalculation:
// Each 3-bit index has its bits spread to every third position:
// bit i of the index maps to bit 3*i of the entry.
static unsigned explode[] = { 0, 1, 0x8, 0x9, 0x40, 0x41, 0x48, 0x49 };

unsigned int zipper(unsigned a0, unsigned a1, unsigned a2)
{
    return explode[a0] | (explode[a1] << 1) | (explode[a2] << 2);
}
With the usual caveats for out-of-bounds indexing, etc.; the table as written only covers 3-bit inputs, so wider numbers must be processed three bits at a time, as sketched below.
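For illustration, a hedged sketch (my addition, names are mine) of extending the table to 21-bit inputs by consuming three bits of each number per iteration; each 3-bit slice of the three inputs expands to 9 output bits:
#include <stdint.h>

uint64_t zipper21(uint32_t a0, uint32_t a1, uint32_t a2) {
    static const uint64_t explode[] = {0, 1, 0x8, 0x9, 0x40, 0x41, 0x48, 0x49};
    uint64_t result = 0;
    for (int i = 0; i < 7; i++) { // 7 slices of 3 bits cover 21 bits
        uint64_t chunk = explode[(a0 >> (3*i)) & 7]
                       | (explode[(a1 >> (3*i)) & 7] << 1)
                       | (explode[(a2 >> (3*i)) & 7] << 2);
        result |= chunk << (9*i);
    }
    return result;
}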
I would just do it by brute force:
unsigned int binaryZip(unsigned int a0, unsigned int a1, unsigned int a2)
{
return ((a0 << 0) & 0x001) |
((a1 << 1) & 0x002) |
((a2 << 2) & 0x004) |
((a0 << 2) & 0x008) |
((a1 << 3) & 0x010) |
((a2 << 4) & 0x020) |
((a0 << 4) & 0x040) |
((a1 << 5) & 0x080) |
((a2 << 6) & 0x100);
}

How can I swap every 2 bits in a binary number?

I'm working on a programming project, and part of it is to write a function, using only bitwise operators, that swaps every two bits. I've come up with a comb-sort-of algorithm that accomplishes this, but it only works for unsigned numbers. Any ideas how I can get it to work with signed numbers as well? I'm completely stumped on this one. Here's what I have so far:
// Mask 1 - For odd bits
int a1 = 0xAA; a1 <<= 24;
int a2 = 0xAA; a2 <<= 16;
int a3 = 0xAA; a3 <<= 8;
int a4 = 0xAA;
int mask1 = a1 | a2 | a3 | a4;
// Mask 2 - For even bits
int b1 = 0x55; b1 <<= 24;
int b2 = 0x55; b2 <<= 16;
int b3 = 0x55; b3 <<= 8;
int b4 = 0x55;
int mask2 = b1 | b2 | b3 | b4;
// Mask Results
int odd = x & mask1;
int even = x & mask2;
int newNum = (odd >> 1) | (even << 1);
return newNum;
The manual creation of the masks by or'ing variables together is because the only constants that can be used are between 0x00-0xFF.
The problem is that odd >> 1 will sign-extend for negative numbers. Simply apply another AND to eliminate the duplicated sign bit:
int newNum = ((odd >> 1) & mask2) | (even << 1);
Minimizing the operators and noticing the sign extension problem gives:
int odd = 0x55;
odd |= odd << 8;
odd |= odd << 16;
int newnum = ((x & odd) << 1)  // this is (sort of) well defined
           | ((x >> 1) & odd); // this handles the sign extension without
                               // additional &-operations
One remark though: bit twiddling should generally be applied to unsigned integers only.
When you right-shift a signed number, the sign bit is replicated into the vacated positions. This is known as sign extension. Typically, when you are dealing with bit shifting, you want to use unsigned numbers.
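A small demonstration of the difference (my addition; note that right-shifting a negative signed value is implementation-defined in C, though common compilers use an arithmetic shift):
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t  s = (int32_t)0x80000000; // negative
    uint32_t u = 0x80000000u;
    printf("%08x\n", (uint32_t)(s >> 1)); // typically c0000000: sign bit duplicated
    printf("%08x\n", u >> 1);             // always    40000000: zero-filled
    return 0;
}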
Minimizing use of constants by working one byte at a time:
unsigned char *byte_p;
unsigned char byte;
int ii;
byte_p = (unsigned char *)&x; // cast needed since x is an int
for(ii = 0; ii < 4; ii++) {
    byte = *byte_p;
    *byte_p = ((byte & 0xAA) >> 1) | ((byte & 0x55) << 1);
    byte_p++;
}
Minimizing operations and keeping constants between 0x00 and 0xFF:
unsigned int comb = (0xAA << 8) + 0xAA; // 0xAAAA
comb += comb << 16;                     // 0xAAAAAAAA
newNum = ((x & comb) >> 1) | ((x & (comb >> 1)) << 1);
10 operations.
Just saw the comments above and realized this implements (more or less) some of the suggestions that @akisuihkonen made. So consider this a tip of the hat!

Swap byte 2 and 4 in a 32 bit integer

I had this interview question:
Swap byte 2 and byte 4 within an integer sequence.
The integer is 4 bytes wide, i.e. 32 bits.
My approach was to use a char pointer and a temp char to swap the bytes.
For clarity I have broken out the steps; otherwise a character array could be used.
unsigned char *b2, *b4, tmpc;
int n = 0xABCD; ///expected output 0xADCB
b2 = (unsigned char *)&n; b2++;
b4 = (unsigned char *)&n; b4 += 3;
///swap the values;
tmpc = *b2;
*b2 = *b4;
*b4 = tmpc;
Any other methods?
int someInt = 0x12345678;
int byte2 = someInt & 0x00FF0000;
int byte4 = someInt & 0x000000FF;
int newInt = (someInt & 0xFF00FF00) | (byte2 >> 16) | (byte4 << 16);
To avoid any concerns about sign extension:
int someInt = 0x12345678;
int newInt = (someInt & 0xFF00FF00) | ((someInt >> 16) & 0x000000FF) | ((someInt << 16) & 0x00FF0000);
(Or, to really impress them, you could use the triple XOR technique.)
Just for fun (probably a typo somewhere):
int newInt = someInt ^ ((someInt >> 16) & 0x000000FF);
newInt = newInt ^ ((newInt << 16) & 0x00FF0000);
newInt = newInt ^ ((newInt >> 16) & 0x000000FF);
(Actually, I just tested it and it works!)
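For reference, the three XORs above are the classic in-place XOR swap applied to two byte lanes of the same integer; the standalone version (my illustration):
#include <stdio.h>

int main(void) {
    unsigned a = 0x34, b = 0x78;
    a ^= b; // a holds a ^ b
    b ^= a; // b becomes the original a
    a ^= b; // a becomes the original b
    printf("%02x %02x\n", a, b); // prints 78 34
    return 0;
}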
You can mask out the bytes you want and shift them around. Something like this:
unsigned int swap(unsigned int n) {
    unsigned int b2 = (0x0000FF00 & n);
    unsigned int b4 = (0xFF000000 & n);
    n ^= b2 | b4;                 // Clear the second and fourth bytes
    n |= (b2 << 16) | (b4 >> 16); // Swap and write them.
    return n;
}
This assumes that the "first" byte is the lowest order byte (even if in memory it may be stored big-endian).
Also it uses unsigned ints everywhere to avoid right shifting introducing extra 1s due to sign extension.
What about unions?
#include <stdio.h>

int main(void)
{
    char tmp;
    union { int n; char ary[4]; } un;
    un.n = 0xABCDEF00;
    tmp = un.ary[3];
    un.ary[3] = un.ary[1];
    un.ary[1] = tmp;
    printf("0x%.2X\n", un.n);
    return 0;
}
in > 0xABCDEF00
out>0xEFCDAB00
Please don't forget to check endianness. This only works for little endian, but it should not be hard to make it portable.

convert big endian to little endian in C [without using provided func] [closed]

I need to write a function to convert big endian to little endian in C. I cannot use any library function.
Assuming what you need is a simple byte swap, try something like
Unsigned 16-bit conversion:
swapped = (num>>8) | (num<<8);
Unsigned 32-bit conversion:
swapped = ((num >> 24) & 0xff)      | // move byte 3 to byte 0
          ((num << 8) & 0xff0000)   | // move byte 1 to byte 2
          ((num >> 8) & 0xff00)     | // move byte 2 to byte 1
          ((num << 24) & 0xff000000); // byte 0 to byte 3
This swaps the byte order from 1234 to 4321. If your input was 0xdeadbeef, the 32-bit endian swap outputs 0xefbeadde.
The code above should be cleaned up with macros or at least constants instead of magic numbers, but hopefully it helps as is
EDIT: As another answer pointed out, there are platform-, OS-, and instruction-set-specific alternatives which can be MUCH faster than the above. In the Linux kernel there are macros (cpu_to_be32, for example) which handle endianness pretty nicely. But these alternatives are specific to their environments. In practice, endianness is best dealt with using a blend of the available approaches.
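One such environment-specific shortcut (my addition, hedged): GCC and Clang provide __builtin_bswap16/32/64, which compile down to a single bswap/rev instruction where the hardware has one:
#include <stdint.h>

static inline uint32_t swap32(uint32_t v) {
#if defined(__GNUC__) || defined(__clang__)
    return __builtin_bswap32(v); // single instruction on x86/ARM
#else
    return (v >> 24) | ((v >> 8) & 0xFF00u)
         | ((v << 8) & 0xFF0000u) | (v << 24);
#endif
}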
By including:
#include <byteswap.h>
you can get optimized versions of the machine-dependent byte-swapping functions.
Then, you can easily use the following functions:
__bswap_32 (uint32_t input)
or
__bswap_16 (uint16_t input)
#include <stdint.h>
//! Byte swap unsigned short
uint16_t swap_uint16( uint16_t val )
{
    return (val << 8) | (val >> 8);
}

//! Byte swap short
int16_t swap_int16( int16_t val )
{
    return (val << 8) | ((val >> 8) & 0xFF);
}

//! Byte swap unsigned int
uint32_t swap_uint32( uint32_t val )
{
    val = ((val << 8) & 0xFF00FF00) | ((val >> 8) & 0xFF00FF);
    return (val << 16) | (val >> 16);
}

//! Byte swap int
int32_t swap_int32( int32_t val )
{
    val = ((val << 8) & 0xFF00FF00) | ((val >> 8) & 0xFF00FF);
    return (val << 16) | ((val >> 16) & 0xFFFF);
}
Update: added 64-bit byte swapping.
int64_t swap_int64( int64_t val )
{
    val = ((val << 8) & 0xFF00FF00FF00FF00ULL) | ((val >> 8) & 0x00FF00FF00FF00FFULL);
    val = ((val << 16) & 0xFFFF0000FFFF0000ULL) | ((val >> 16) & 0x0000FFFF0000FFFFULL);
    return (val << 32) | ((val >> 32) & 0xFFFFFFFFULL);
}

uint64_t swap_uint64( uint64_t val )
{
    val = ((val << 8) & 0xFF00FF00FF00FF00ULL) | ((val >> 8) & 0x00FF00FF00FF00FFULL);
    val = ((val << 16) & 0xFFFF0000FFFF0000ULL) | ((val >> 16) & 0x0000FFFF0000FFFFULL);
    return (val << 32) | (val >> 32);
}
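A quick sanity check of the 64-bit version (my addition; assumes the functions above are in scope):
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t v = 0x0102030405060708ULL;
    printf("%016llx\n", (unsigned long long)swap_uint64(v)); // 0807060504030201
    return 0;
}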
Here's a fairly generic version; I haven't compiled it, so there are probably typos, but you should get the idea,
#include <assert.h>
#include <stddef.h>

void SwapBytes(void *pv, size_t n)
{
    assert(n > 0);
    char *p = pv;
    size_t lo, hi;
    for(lo = 0, hi = n - 1; hi > lo; lo++, hi--)
    {
        char tmp = p[lo];
        p[lo] = p[hi];
        p[hi] = tmp;
    }
}

#define SWAP(x) SwapBytes(&x, sizeof(x));
NB: This is not optimised for speed or space. It is intended to be clear (easy to debug) and portable.
Update 2018-04-04
Added the assert() to trap the invalid case of n == 0, as spotted by commenter @chux.
If you need macros (e.g. embedded system):
#define SWAP_UINT16(x) (((x) >> 8) | ((x) << 8))
#define SWAP_UINT32(x) (((x) >> 24) | (((x) & 0x00FF0000) >> 8) | (((x) & 0x0000FF00) << 8) | ((x) << 24))
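A usage sketch (my addition). Note that, being macros, these expand their argument more than once, so arguments with side effects are unsafe:
#include <stdint.h>
#include <stdio.h>

#define SWAP_UINT16(x) (((x) >> 8) | ((x) << 8)) // repeated from above

int main(void) {
    uint16_t v = 0x1234;
    printf("%04x\n", (uint16_t)SWAP_UINT16(v)); // prints 3412
    // Caution: SWAP_UINT16(v++) would increment v twice.
    return 0;
}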
Edit: These are library functions. Following them is the manual way to do it.
I am absolutely stunned by the number of people unaware of __byteswap_ushort, __byteswap_ulong, and __byteswap_uint64. Sure they are Visual C++ specific, but they compile down to some delicious code on x86/IA-64 architectures. :)
Here's an explicit usage of the bswap instruction, pulled from this page. Note that the intrinsic form above will always be faster than this; I only added it to give an answer without a library routine.
uint32 cq_ntohl(uint32 a) {
    __asm{
        mov eax, a;
        bswap eax;
    }
}
As a joke:
#include <stdio.h>

int main (int argc, char *argv[])
{
    size_t sizeofInt = sizeof (int);
    int i;
    union
    {
        int x;
        char c[sizeof (int)];
    } original, swapped;
    original.x = 0x12345678;
    for (i = 0; i < sizeofInt; i++)
        swapped.c[sizeofInt - i - 1] = original.c[i];
    fprintf (stderr, "%x\n", swapped.x);
    return 0;
}
Here's a way using the SSSE3 instruction pshufb via its Intel intrinsic, assuming you have a multiple of 4 ints:
#include <tmmintrin.h> // SSSE3

unsigned int *bswap(unsigned int *destination, unsigned int *source, int length) {
    int i;
    __m128i mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11, 4, 5, 6, 7, 0, 1, 2, 3);
    for (i = 0; i < length; i += 4) {
        _mm_storeu_si128((__m128i *)&destination[i],
            _mm_shuffle_epi8(_mm_loadu_si128((__m128i *)&source[i]), mask));
    }
    return destination;
}
Will this work / be faster?
uint32_t swapped, result;
((unsigned char *)&swapped)[0] = ((unsigned char *)&result)[3];
((unsigned char *)&swapped)[1] = ((unsigned char *)&result)[2];
((unsigned char *)&swapped)[2] = ((unsigned char *)&result)[1];
((unsigned char *)&swapped)[3] = ((unsigned char *)&result)[0];
This code snippet converts a 32-bit little-endian number to big-endian (and back).
#include <stdio.h>

int main(void){
    unsigned int i = 0xfafbfcfd;
    unsigned int j;
    j = ((i & 0xff000000) >> 24) | ((i & 0xff0000) >> 8) | ((i & 0xff00) << 8) | ((i & 0xff) << 24);
    printf("unsigned int j = %x\n", j);
    return 0;
}
Here's a function I have been using - tested and works on any basic data type:
// SwapBytes.h
//
// Function to perform in-place endian conversion of basic types
//
// Usage:
//
//   double d;
//   SwapBytes(&d, sizeof(d));
//
inline void SwapBytes(void *source, int size)
{
    typedef unsigned char TwoBytes[2];
    typedef unsigned char FourBytes[4];
    typedef unsigned char EightBytes[8];
    unsigned char temp;
    if(size == 2)
    {
        TwoBytes *src = (TwoBytes *)source;
        temp = (*src)[0];
        (*src)[0] = (*src)[1];
        (*src)[1] = temp;
        return;
    }
    if(size == 4)
    {
        FourBytes *src = (FourBytes *)source;
        temp = (*src)[0];
        (*src)[0] = (*src)[3];
        (*src)[3] = temp;
        temp = (*src)[1];
        (*src)[1] = (*src)[2];
        (*src)[2] = temp;
        return;
    }
    if(size == 8)
    {
        EightBytes *src = (EightBytes *)source;
        temp = (*src)[0];
        (*src)[0] = (*src)[7];
        (*src)[7] = temp;
        temp = (*src)[1];
        (*src)[1] = (*src)[6];
        (*src)[6] = temp;
        temp = (*src)[2];
        (*src)[2] = (*src)[5];
        (*src)[5] = temp;
        temp = (*src)[3];
        (*src)[3] = (*src)[4];
        (*src)[4] = temp;
        return;
    }
}
EDIT: This function only swaps the endianness of aligned 16-bit words, which is often needed for UTF-16/UCS-2 encodings. EDIT END.
If you want to change the endianness of a memory block, you can use my blazingly fast approach.
Your memory array should have a size that is a multiple of 8.
#include <stddef.h>
#include <limits.h>
#include <stdint.h>

void ChangeMemEndianness(uint64_t *mem, size_t size)
{
    uint64_t m1 = 0xFF00FF00FF00FF00ULL, m2 = m1 >> CHAR_BIT;
    size = (size + (sizeof (uint64_t) - 1)) / sizeof (uint64_t);
    for(; size; size--, mem++)
        *mem = ((*mem & m1) >> CHAR_BIT) | ((*mem & m2) << CHAR_BIT);
}
This kind of function is useful for changing the endianness of Unicode UCS-2/UTF-16 files.
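A hedged extension (my addition; assumes ChangeMemEndianness above is in scope): for a full 32-bit byte reversal, first swap bytes within each 16-bit word as above, then swap the 16-bit halves of each 32-bit word:
#include <stddef.h>
#include <stdint.h>

void ChangeMemEndianness32(uint64_t *mem, size_t size)
{
    uint64_t m1 = 0xFFFF0000FFFF0000ULL, m2 = m1 >> 16;
    ChangeMemEndianness(mem, size); // bytes within each 16-bit unit
    size = (size + (sizeof (uint64_t) - 1)) / sizeof (uint64_t);
    for(; size; size--, mem++)      // 16-bit halves within each 32-bit word
        *mem = ((*mem & m1) >> 16) | ((*mem & m2) << 16);
}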
If you are running on an x86 or x86_64 processor, little endian is native, so you just need to swap the bytes of big-endian values:
For 16-bit values:
unsigned short wBigE = value;
unsigned short wLittleE = ((wBigE & 0xFF) << 8) | (wBigE >> 8);
For 32-bit values:
unsigned int iBigE = value;
unsigned int iLittleE = ((iBigE & 0xFF) << 24)
                      | ((iBigE & 0xFF00) << 8)
                      | ((iBigE >> 8) & 0xFF00)
                      | (iBigE >> 24);
This isn't the most efficient solution unless the compiler recognises that this is byte level manipulation and generates byte swapping code. But it doesn't depend on any memory layout tricks and can be turned into a macro pretty easily.
