Can I sum integer input from terminal without saving the input as a variable? - c

I'm trying to write code for the digital root of an extremely big number and can't save it in a variable. Is it possible to do this without one?

What you're looking to do is to repeatedly add the digits of a number until you're left with a single digit number, i.e. given 123456, you want 1 + 2 + 3 + 4 + 5 + 6 = 21 ==> 2 + 1 = 3
For a number with up to 50 million digits, the sum of those digits will be no more than 500 million, which is well within the range of a 32-bit int.
Start by reading the large number as a string. Then iterate over each character in the string. For each character, verify that it's a digit character, i.e. between '0' and '9'. Convert that character to the corresponding number, then add that number to the sum.
Once you've done that, you've got the first-level sum stored in an int. Now you can loop through the digits of that number using x % 10 to get the lowest digit and x / 10 to shift over the remaining digits. Once you've exhausted the digits, repeat the process until you're left with a value less than 10.
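Putting the two steps together, here is a minimal C sketch. It streams the digits with getchar instead of first storing them in a string, which is one way to honor the question's "no variable for the number" constraint; using long for the sum and stopping at a newline are assumptions of this sketch, not requirements of the approach above.

#include <stdio.h>

int main(void)
{
    long sum = 0;   /* digit sum: at most 9 * 50 million, fits easily */
    int c;

    /* First pass: add up the digits as they arrive, never storing the number. */
    while ((c = getchar()) != EOF && c != '\n') {
        if (c >= '0' && c <= '9')   /* verify it's a digit character */
            sum += c - '0';
    }

    /* Collapse the sum digit by digit until a single digit remains. */
    while (sum >= 10) {
        long next = 0;
        while (sum > 0) {
            next += sum % 10;   /* lowest digit */
            sum /= 10;          /* shift the remaining digits over */
        }
        sum = next;
    }

    printf("%ld\n", sum);   /* the digital root */
    return 0;
}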

Related

Take two integers a,b and print all the even numbers in between them, excluding the input integers

Take two integers a,b and print all the even numbers in between them, excluding the input integers.
Constraints:
0 <= a < b <= 100
Input:
Two integers each in a new line.
Output:
Each line in the output contains an even integer between a,b in ascending order.
Example:
Input:
3
12
Output:
4
6
8
10
I could not figure out its logic.
I don't get the question that well, but here's what I got out of it:
for i in range(a + 1, b):
    if i % 2 == 0:
        print(i)
This loop goes through every integer strictly between your two bounds (a and b), checks whether it's divisible by 2, and if it is (which means it's even), prints it.

Why is there a difference in precision range widths for decimal?

As is evident from the MSDN description of decimal, certain precision ranges have the same number of storage bytes assigned to them.
What I don't understand is why the ranges differ in width. The range from 1 to 9 (5 storage bytes) has a width of 9, while the range from 10 to 19 (9 storage bytes) has a width of 10. The next range (13 storage bytes) has a width of 9 again, while the one after that has a width of 10 again.
Since the storage bytes increase by 4 each time, I would have expected all of the ranges to be the same width. Or perhaps the first one to be smaller, to reserve space for the sign or something, but equal in width from then on. Instead it goes from 9 to 10 to 9 to 10 again.
What's going on here? And if it existed, would 21 storage bytes have a precision range of 39-47, i.e. is the pattern 9-10-9-10-9-10...?
would 21 storage bytes have a precision range of 39-47
No. 2^160 = 1,461,501,637,330,902,918,203,684,832,716,283,019,655,932,542,976, which has 49 decimal digits. So this hypothetical scenario would cater for a precision range of 39-48 (as a 20-byte integer would not be big enough to hold 49-digit numbers larger than that).
The first byte is reserved for the sign.
01 is used for positive numbers; 00 for negative.
The remainder stores the value as an integer, i.e. 1.234 would be stored as the integer 1234 (or some power-of-10 multiple of it, depending on the declared scale).
The length of the integer is 4, 8, 12 or 16 bytes, depending on the declared precision. Some 10-digit integers can be stored in 4 bytes, but fitting the whole 10-digit range would overflow 4 bytes, so it needs to go to the next step up.
And so on.
2^32 = 4,294,967,296 (10 digits)
2^64 = 18,446,744,073,709,551,616 (20 digits)
2^96 = 79,228,162,514,264,337,593,543,950,336 (29 digits)
2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456 (39 digits)
You need to use DBCC PAGE to see this; casting the column as binary does not give you the storage representation. Or use a utility like SQL Server Internals Viewer.
CREATE TABLE T(
    A DECIMAL( 9,0),
    B DECIMAL(19,0),
    C DECIMAL(28,0),
    D DECIMAL(38,0)
);
INSERT INTO T VALUES
(999999999, 9999999999999999999, 9999999999999999999999999999, 99999999999999999999999999999999999999),
(-999999999, -9999999999999999999, -9999999999999999999999999999, -99999999999999999999999999999999999999);
DBCC PAGE shows the first row and the second stored as byte sequences (the page-dump screenshots are not reproduced here).
Note that the values after the sign byte are byte-reversed: 0x3B9AC9FF = 999999999.
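To make that layout concrete, here is a hedged C sketch that decodes a 5-byte DECIMAL(9,0) value arranged as described: a sign byte followed by the integer, byte-reversed. The byte values are just the 999999999 example above, hard-coded for illustration; this imitates the layout rather than reading a real page.

#include <stdio.h>

int main(void)
{
    /* Illustrative 5-byte storage of DECIMAL(9,0) value 999999999:
       sign byte 0x01 (positive), then 0x3B9AC9FF byte-reversed. */
    unsigned char stored[5] = { 0x01, 0xFF, 0xC9, 0x9A, 0x3B };

    int positive = (stored[0] == 0x01);   /* 01 = positive, 00 = negative */

    /* Reassemble the byte-reversed integer, most significant byte first. */
    unsigned long value = 0;
    for (int i = 4; i >= 1; i--)
        value = (value << 8) | stored[i];

    printf("%s%lu\n", positive ? "" : "-", value);   /* prints 999999999 */
    return 0;
}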

Integer compression method

How can I compress a sequence of integers into something shorter?
Like:
Input: '1 2 4 5 3 5 2 3 1 2 3 4' -> Algorithm -> Output: 'X Y Z'
and can get it back the other way around? ('X Y Z' -> '1 2 4 5 3 5 2 3 1 2 3 4')
Note: the input will only contain numbers between 1 and 5, and the total count of numbers will be 10-16.
Is there any way I can compress it to 3-5 numbers?
Here is one way. First, subtract one from each of your little numbers. For your example input that results in
0 1 3 4 2 4 1 2 0 1 2 3
Now treat that as the base-5 representation of an integer. (You can choose either most significant digit first or last.) Calculate the number in binary that means the same thing. Now you have a single integer that "compressed" your string of little numbers. Since you have shown no code of your own, I'll just stop here. You should be able to implement this easily.
Since you will have at most 16 little numbers, the maximum resulting value from that algorithm will be just under 5^16 = 152,587,890,625. This fits into 38 bits. If you need to store it as smaller numbers, convert the resulting value into another, larger number base, such as 2^16 or 2^32. The former would result in 3 numbers, the latter in 2.
@SergGr points out in a comment that this method does not record the number of integers encoded. If that is not stored separately, it can be a problem, since the method does not distinguish between leading zeros and coded zeros. There are several ways to handle that, if you need the count included in the compression. You could require the most significant digit to be 1 (whether that is the first or last digit depends on your chosen digit order). This increases the number of bits by one, so you now may need 39 bits.
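Here is a hedged C sketch of that encoding, assuming at most 16 numbers so the result fits in a uint64_t. It realizes the "most significant digit is 1" rule as a base-5 sentinel digit one position above the top; the toy example that follows uses a closely related binary-bit variant of the same idea.

#include <stdint.h>
#include <stdio.h>

/* Encode n little numbers (each 1..5): subtract 1, pack as base-5 with the
   least significant digit first, then add a sentinel digit 1 in position n
   so that leading zeros are preserved. */
uint64_t encode(const int *numbers, int n)
{
    uint64_t value = 0;
    uint64_t place = 1;
    for (int i = 0; i < n; i++) {
        value += (uint64_t)(numbers[i] - 1) * place;
        place *= 5;
    }
    return value + place;   /* place is now 5^n: the sentinel */
}

int main(void)
{
    int input[12] = { 1, 2, 4, 5, 3, 5, 2, 3, 1, 2, 3, 4 };
    printf("%llu\n", (unsigned long long)encode(input, 12));   /* prints 412295580 */
    return 0;
}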
Here is a toy example of variable-length encoding. Assume we want to encode two strings: 1 2 3 and 1 2 3 0 0. How will the results differ? Consider two base-5 numbers, 321 and 00321. They represent the same value, but let's convert them into base 2 while preserving the padding.
1 + 2*5 + 3*5^2 = 86 dec = 1010110 bin
1 + 2*5 + 3*5^2 + 0*5^3 + 0*5^4 = 000001010110 bin
The additional 0s in the second line are there because the biggest 5-digit base-5 number, 44444, has the 12-bit base-2 representation 110000110100, so the binary representation of our number is padded to the same size.
Note that there is no need to pad the first line, because the biggest 3-digit base-5 number, 444, has the base-2 representation 1111100, i.e. of the same length as ours. For an initial string like 3 2 1 some padding would be required as well, so padding can be needed even when the top digits are not 0.
Now let's add the most significant 1 to the binary representations, and those will be our encoded values:
1 2 3 => 11010110 binary = 214 dec
1 2 3 0 0 => 1000001010110 binary = 4182 dec
There are many ways to decode those values back. One of the simplest (but not the most efficient) is to first calculate the number of base-5 digits as floor(log5(encoded)), then remove the top bit and fill in the digits one by one using mod-5 and divide-by-5 operations.
Obviously, such variable-length encoding always adds exactly one bit of overhead.
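A matching decoder for the encode sketch above (with the base-5 sentinel, the "top bit" to remove is the sentinel digit 5^n, and an integer log base 5 recovers the digit count exactly as suggested):

#include <stdint.h>
#include <stdio.h>

/* Decode: recover the count n from the sentinel (floor of log base 5),
   strip the sentinel, then peel digits with mod 5 / divide by 5 and
   add the 1 back. Returns the number of little numbers recovered. */
int decode(uint64_t value, int *numbers_out)
{
    uint64_t place = 1;
    int n = 0;
    while (place * 5 <= value) {   /* find n with 5^n <= value < 5^(n+1) */
        place *= 5;
        n++;
    }

    value -= place;   /* remove the sentinel digit */

    for (int i = 0; i < n; i++) {
        numbers_out[i] = (int)(value % 5) + 1;
        value /= 5;
    }
    return n;
}

int main(void)
{
    int out[16];
    int n = decode(412295580ULL, out);   /* value from the encode sketch */
    for (int i = 0; i < n; i++)
        printf("%d ", out[i]);           /* prints 1 2 4 5 3 5 2 3 1 2 3 4 */
    printf("\n");
    return 0;
}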
It's called polidatacompressor.js, but the license will cost you; you have to ask the author about prices LOL
https://github.com/polidatacompressor/polidatacompressor
Ncomp(65535) will output 255, 255, and when you store this in a database as bytes you get 2 chars.
Another way is to use hexadecimal (base 16): in JavaScript, (1231).toString(16) gives you '4cf'. In about 60% of cases this shortens the string by one character.
Or convert base 10 to base 62 with https://github.com/base62/base62.js/
4131 --> 14D
413131 --> 1Jtp

Find amount of natural numbers generated from array of digit characters

Good day.
I have an array of digit characters, e.g. ['9','0'] or ['9','5','2','0','0','0']. I need to find the count of all natural numbers, with length equal to the array size, that can be generated from the source array. For example, for ['9','0'] the only such number is 90, so the answer is 1.
If the array has no 0 and no duplicate digits, the count can be calculated with a factorial:
['5','7','2'] => 3! => 6
['1','2','3','4','5','6','7'] => 7! => 5040.
When zeros or duplicates appear, this changes.
More examples: https://www.codewars.com/kumite/5a26eb9ab6486ae2680000fe?sel=5a26eb9ab6486ae2680000fe
Thank you
P.S. A formula would be better; I know how to solve this problem with loops:
def g(a)
  a.permutation(a.size)
   .select { |x| x.join.to_i.to_s.split("").size == a.size }
   .to_a.uniq.size
end
The only difference with '0' is that you can't have a leading '0', i.e. the first digit cannot be 0.
The formula, given an array of N numbers, becomes
(N - Number of zeros) * (N-1)!
When there are no zeros, this is just N!.
Now consider the case with duplicates. Let's say there are K '1's in the array. For every permutation counted above, the K '1's can be swapped among themselves in K! ways, so you need to divide the result by K!. This needs to be done for every digit that is duplicated. When there is no duplication (0 or 1 occurrences of a digit), you divide by 0! or 1!, which does not change the value.
Sample case: [0, 0, 1, 1]
4 digits, 2 zeros, 2 ones
(4-2) * 3! / (2! * 2!) = 3
Possible permutations: 1001, 1010, 1100
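A hedged C sketch of the whole formula (count digit frequencies, compute (N - zeros) * (N-1)!, then divide by k! for each duplicated digit; the function name and taking the digits as a char string are illustrative choices):

#include <stdio.h>

/* Count distinct natural numbers using all digits of the input,
   excluding arrangements with a leading zero:
   (N - zeros) * (N-1)! / (k0! * k1! * ... * k9!)
   where kd is how often digit d occurs. */
unsigned long long count_numbers(const char *digits, int n)
{
    int freq[10] = { 0 };
    for (int i = 0; i < n; i++)
        freq[digits[i] - '0']++;

    /* (N - zeros) * (N-1)! */
    unsigned long long numer = (unsigned long long)(n - freq[0]);
    for (int i = 1; i < n; i++)
        numer *= (unsigned long long)i;

    /* k! for every duplicated digit */
    unsigned long long denom = 1;
    for (int d = 0; d < 10; d++)
        for (int k = 2; k <= freq[d]; k++)
            denom *= (unsigned long long)k;

    return numer / denom;   /* always an integer */
}

int main(void)
{
    printf("%llu\n", count_numbers("0011", 4));    /* prints 3 */
    printf("%llu\n", count_numbers("952000", 6));  /* prints 60 */
    return 0;
}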

C: how to store integers by reading line by line with a random layout?

I need to read a file and store each number (an int) in a variable; when it sees a \n or a '-' (a dash marks a range, so 1-5 means it should store the numbers from 1 to 5), it needs to store into the next variable. How should I proceed?
I was thinking of using fgets() but I can't find a way to do what I want.
The input looks like this:
0
0
5 10
4
2 4
5-10 2 3 4 6 7-9
4 3
These are x y positions.
I'd use fscanf to read one int at a time; when a value comes out negative, it is obviously the second part of a range. Or is -4--2 valid input?
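A minimal sketch of that idea, assuming ranges never start with a negative number. The file name is a placeholder, and the printf calls stand in for whatever storage you actually need; note that %d skips newlines too, so splitting values per line would need extra handling.

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("input.txt", "r");   /* hypothetical file name */
    if (f == NULL)
        return 1;

    int n, prev = 0;
    while (fscanf(f, "%d", &n) == 1) {
        if (n < 0) {
            /* "5-10" scans as 5 then -10: a negative value ends a range,
               so expand everything between the previous value and -n. */
            for (int i = prev + 1; i <= -n; i++)
                printf("%d ", i);
            prev = -n;
        } else {
            printf("%d ", n);
            prev = n;
        }
    }
    printf("\n");

    fclose(f);
    return 0;
}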
