I'm writing a program that needs to be able to set file permissions, but for whatever reason, chmod is not behaving the way I would expect it to. For a couple of tests, I attempted to create two different files (fileOne.txt and fileTwo.txt). fileOne.txt should have permissions set to 600, while fileTwo.txt should have permissions set to 777.
Running my program results in the following:
fileOne.txt having permissions set to ---x-wx--T
fileTwo.txt having permissions set to -r----x--t
?? WHAT?
Below is my code. The results of my printf are as anticipated (600, 777), so why would chmod not like this?
printf("chmod = %d\n", (int)getHeader.p_owner * 100 + (int)getHeader.p_group * 10 + (int)getHeader.p_world);
chmod(getHeader.file_name, (int)getHeader.p_owner * 100 + (int)getHeader.p_group * 10 + (int)getHeader.p_world);
UNIX file system permissions are expressed in octal, not decimal, so multiplying by 100 and 10 will give you unexpected results.
Permissions are written in octal, so 600 is in fact 0600 in C (or 384 in decimal).
Hence the code should be:
printf("chmod = %d\n", (int)getHeader.p_owner * 100 + (int)getHeader.p_group * 10 + (int)getHeader.p_world);
chmod(getHeader.file_name, (int)getHeader.p_owner * 0100 + (int)getHeader.p_group * 010 + (int)getHeader.p_world);
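An alternative that avoids the multiplication entirely is to build the mode from bit shifts, since each permission group is three bits wide. Here is a minimal sketch assuming each field holds a value from 0 to 7 (the variable names are stand-ins for your getHeader fields, not your actual struct):

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    /* Stand-ins for getHeader.p_owner / p_group / p_world (each 0-7). */
    unsigned owner = 6, group = 0, world = 0;

    /* Each permission group occupies 3 bits: owner << 6, group << 3, world << 0. */
    mode_t mode = (mode_t)((owner << 6) | (group << 3) | world);

    printf("chmod = %o\n", (unsigned)mode);   /* prints 600 (octal) */

    if (chmod("fileOne.txt", mode) != 0)
        perror("chmod");
    return 0;
}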
I saw Explain BigInt Like I'm Five, but I already understand what a BigInt is. I want to know how to make one, though. I am trying to pick apart BigInt.js (the V8 bigint.cc is too large, and I'm not familiar with C++).
For myself, and perhaps others in the future, could someone explain what the data model looks like for a BigInt that supports arbitrarily sized integers? Basically, what is the object and what are its properties? I get that there are all the arithmetic functions implemented in unique ways for the BigInt, but I don't see what the kernel is. What is the essence of the structure of the BigInt? Perhaps this one will be slightly easier to grok.
A BigInt works exactly like the integers you learned about in school, except that the "digits" are not based on 10 symbols; they are based on 4294967296 (2^32), or 18446744073709551616 (2^64), or, specifically for ECMAScript, 9007199254740991 (2^53 - 1).
The kernel of the data model is simply a list of "digits" that are themselves fixed-size integers and a sign bit (or alternatively, the first "digit" is itself signed). Everything else would be a performance optimization.
In pseudo-code, it would look something like this:
record BigInt
sign: boolean
digits: sequence[unsigned_integer]
or this:
record BigInt
first_digit: signed_integer
digits: sequence[unsigned_integer]
Again, if you write down an integer in base 10, you write it as a sequence of digits and a sign; e.g. to write the year 2019, you would write 2, 0, 1, 9, signifying (from right to left)
  9 * 10^0 =    9
+ 1 * 10^1 =   10
+ 0 * 10^2 =    0
+ 2 * 10^3 = 2000
             ====
             2019
Or, maybe you would write 7, E, 3, signifying (from right-to-left)
3_16 * 10_16^0
+ E_16 * 10_16^1
+ 7_16 * 10_16^2
which is the same as
3_16 * 16_10^0
+ E_16 * 16_10^1
+ 7_16 * 16_10^2
which is the same as
3_10 * 16_10^0 = 3_10
+ 14_10 * 16_10^1 = 224_10
+ 7_10 * 16_10^2 = 1792_10
=======
2019_10
And a BigInt is represented in exactly the same way, except the base is (much) larger.
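For concreteness, here is a minimal sketch of that data model in C. The names, the fixed four-digit capacity, and the base-2^32 digit size are my own choices for illustration, not what BigInt.js or V8 actually use:

#include <stdint.h>
#include <stdio.h>

/* Sign-and-magnitude big integer: least significant "digit" first,
   where each digit is one base-2^32 symbol. */
typedef struct {
    int      negative;   /* sign                                    */
    size_t   length;     /* number of digits in use                 */
    uint32_t digits[4];  /* fixed capacity to keep the sketch short */
} BigInt;

/* Interpret the digits exactly like positional base-10 notation,
   only the base is 2^32. (Overflows for values >= 2^64; fine for a demo.) */
static uint64_t to_u64(const BigInt *x)
{
    uint64_t value = 0;
    for (size_t i = x->length; i > 0; i--)
        value = value * 4294967296ULL + x->digits[i - 1];
    return value;
}

int main(void)
{
    /* 5000000000 = 1 * 2^32 + 705032704 */
    BigInt x = { 0, 2, { 705032704u, 1u, 0, 0 } };
    printf("%llu\n", (unsigned long long)to_u64(&x));
    return 0;
}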
Please read again till the end (description updated).
I want something like this, for example:
if (7200 / 42) is float then
floor(7200/42) + [7200 - {(floor(7200/42)) * 42}] / 10 ^ length of [7200 - {(floor(7200/42)) * 42}]
STEP : 1 => 171 + ((7200 - (171*42))/10 ^ len(7200-7182))
STEP : 2 => 171 + ((7200 - 7182)/10 ^ len(18))
STEP : 3 => 171 + (18/10 ^ 2)
STEP : 4 => 171 + (18/100)
STEP : 5 => 171 + 0.18
STEP : 6 => 171.18
I have written the code in SQL, which works up to the last step, but the addition 171 + 0.18 only gives 171.
If I can get "171/18" instead of "171.18" as a string, that would also be great (the / is just used as a separator and not a division sign).
Following is the code I have written.
Here,
(FAP.FQTY + FAP.QTY) = 7200,
PRD.CRT = 42
(values only for example)
select
case when PRD.CRT <> 0 then
case when (FAP.FQTY + FAP.QTY)/PRD.CRT <> FLOOR((FAP.FQTY + FAP.QTY)/PRD.CRT) then --DETERMINE WHETHER VALUE IS FLOAT OR NOT
(floor((FAP.FQTY + FAP.QTY)/PRD.CRT)) +
((FAP.FQTY + FAP.QTY) - floor((FAP.FQTY + FAP.QTY)/PRD.CRT) * PRD.CRT) /
POWER(10, len(floor((FAP.FQTY + FAP.QTY) - floor((FAP.FQTY + FAP.QTY)/PRD.CRT) * PRD.CRT)))
else
(FAP.FQTY + FAP.QTY)/PRD.CRT -- INTEGER
end
else
0
end
from FAP inner join PRD on FAP.Comp_Year = PRD.Comp_Year and
FAP.Comp_No = PRD.Comp_No and FAP.Prd_Code = PRD.Prd_Code
I get all the values correct up to 171 + 0.1800, but the addition only returns 171. I want exactly 171.18.
REASON FOR THIS CONFUSING CALCULATION
It's all about accounting.
Suppose a box (or a carton) holds 42 items.
A person sends 7200 items. How many boxes does he have to send?
That would be 7200 / 42 = 171.4286.
But boxes cannot be split (it must be a whole number, i.e. 171).
So 171 * 42 = 7182 items.
Remaining items = 7200 - 7182 = 18.
So the answer is 171 boxes and 18 items.
In short, 171.18 or "171/18".
Please help me with this.
Thank you in advance.
Recognise that you're not producing an actual numeric result; I'd describe it as unhealthy to try to keep it in such a datatype¹.
This produces the strings you're seeking, if I've understood your requirement:
;With StartingPoint as (
select 7200 as Dividend, 42 as Divisor
)
select
CONVERT(varchar(10),Quotient) +
CASE WHEN Remainder > 0 THEN '.' + CONVERT(varchar(10),Remainder)
ELSE '' END as FinalString
from
StartingPoint
cross apply
(select Dividend/Divisor as Quotient, Dividend % Divisor as Remainder) t
(Not tested for negative values. Some adjustments may be required. Technically, % computes the modulus rather than the remainder, etc.)
¹ Because someone might try to add two of these values together, and I doubt that produces a correct result, not even necessarily when using the same Divisor to compute both.
Just another idea about how to calculate it.
Simply calculate the whole boxes,
and concatenate a separator with the remaining items (using the modulus).
Wrap it all up in a CASE WHEN (or IIF) to avoid a divide by zero.
Example snippet:
declare @TestTable table (FQTY numeric(18,2), QTY numeric(18,2), CRT numeric(18,0));
insert into @TestTable (FQTY,QTY,CRT) values
(5000, 2200, 42),
(5000, 2200, 0),
( 100, 200, 10);
select *,
(CASE
WHEN CRT>0
THEN CONCAT(CAST(FLOOR((FQTY+QTY)/CRT) as INT),'/',CAST((FQTY+QTY)%CRT as INT))
ELSE '0'
END) AS Boxes
from @TestTable;
Result:
FQTY QTY CRT Boxes
------- ------- --- ------
5000.00 2200.00 42 171/18
5000.00 2200.00 0 0
100.00 200.00 10 30/0
The CONCAT returns a varchar, and so does the CASE WHEN.
But you could wrap that CASE WHEN in a CAST.
You're getting an automatic type conversion from int to decimal(10,0), which is probably not what you want.
https://learn.microsoft.com/en-us/sql/t-sql/data-types/int-bigint-smallint-and-tinyint-transact-sql?view=sql-server-2017
Check out the "Caution" box.
If you want a specific amount of precision, you'll need to explicitly cast() the values to the desired data type.
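As a loose analogue (C shown here, not SQL Server's exact conversion rules, just the same class of problem): whenever an intermediate result lands in a type with no fractional digits, the fraction is silently discarded, which is how 171 + 0.18 can collapse to 171.

#include <stdio.h>

int main(void)
{
    int remainder = 18;

    /* Integer-typed intermediate: 18 / 100 truncates to 0, so the sum stays 171. */
    int wrong = 171 + remainder / 100;

    /* Force the division into a fractional type first (the "explicit cast"). */
    double right = 171 + (double)remainder / 100;

    printf("%d\n", wrong);    /* 171    */
    printf("%.2f\n", right);  /* 171.18 */
    return 0;
}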
If I understand your logic correctly, you want the quotient of 7200 divided by 42, plus the remainder divided by 10^len(remainder):
declare
@dividend int = 7200,
@divisor int = 42
select (@dividend / @divisor)
+ convert(decimal(10,4),
(@dividend % @divisor) * 1.0 / power(10, len(@dividend % @divisor)))
EDIT: changed to handle the 10^len(remainder) part.
Let's say I want to write 1, 2, 3, 4, ... up to 4.096 billion in a text file. What would be a time-efficient way to do it? Just doing it sequentially is taking a long time, so I'm wondering if there's a distributed way.
Thanks for all your comments on my question. They helped me solve this problem in a reasonable amount of time. Here's what I did:
Create a file using Excel containing a million integers from 0 to 1000000.
Upload this file to Hadoop.
Write a Hive query with 4296 lines like the ones below:
a0 = SELECT IPDecimal + (100000 * 1) + 1 AS IPDecimal FROM #file;
a1 = SELECT IPDecimal + (100000 * 2) + 1 AS IPDecimal FROM #file;
.
.
.
a4295 = SELECT IPDecimal + (100000 * 4295) + 1 AS IPDecimal FROM #file;
Output the result of each SELECT statement above to a separate file, and then consolidate the integers from the 4296 files into one single file.
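If you don't have a cluster handy, the same chunking idea works locally: have each worker (process, thread, or machine) write its own contiguous range to its own file, then concatenate the files at the end. A minimal single-chunk sketch in C (the file name and range are illustrative, and the parallel scheduling is left to whatever launches the workers):

#include <stdio.h>
#include <stdlib.h>

/* Write the integers [start, end) to the given file, one per line.
   Run one invocation per chunk (in parallel) and concatenate the outputs. */
static int write_chunk(const char *path, long long start, long long end)
{
    FILE *f = fopen(path, "w");
    if (f == NULL)
        return -1;

    for (long long i = start; i < end; i++)
        fprintf(f, "%lld\n", i);

    return fclose(f);
}

int main(void)
{
    /* Example: the first of many 1,000,000-number chunks. */
    if (write_chunk("chunk_0000.txt", 1, 1000001) != 0) {
        perror("write_chunk");
        return EXIT_FAILURE;
    }
    return 0;
}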
I have two functions that contain simple assignment statements with very simple expressions. The expressions are the same for both functions; however, they involve different variable types: one function uses an array of structs, the other just uses a typedef'd struct.
When running the functions, the second function fails to divide by 256, and I get very high values that are not "normalized". I have to uncomment the second line in the second function (valueB = valueB / 256) to get it to work.
The first function, however, works perfectly.
Here's the statement in Function One:
value = ((p[0].value * p2Area)+(p[1].value * p3Area)+(p[2].value * p0Area)+(p[3].value * p1Area) / 256);
Here's the statement in Function Two:
valueB = ((dataPoints.p0B * p2Area)+(dataPoints.p1B * p3Area)+(dataPoints.p2B * p0Area)+(dataPoints.p3B * p1Area) / 256);
//valueB = valueB / 256;
Why would this happen?
Also, I pass the functions the same numbers and it doesn't seem to help.
This is on Mac OS X 10.6.8, inside Xcode 3.2.6.
Are you absolutely sure the first one works properly? You have
value = ((p[0].value * p2Area)+(p[1].value * p3Area)+(p[2].value * p0Area)+(p[3].value * p1Area) / 256);
I think you want:
value = (((p[0].value * p2Area)+(p[1].value * p3Area)+(p[2].value * p0Area)+(p[3].value * p1Area)) / 256);
Similar thing with the second. I think it should be:
valueB = (((dataPoints.p0B * p2Area)+(dataPoints.p1B * p3Area)+(dataPoints.p2B * p0Area)+(dataPoints.p3B * p1Area)) / 256);
In both cases I think you want to divide the sum of the products by 256, not just the last one. My change only involves placing an extra set of parentheses around the sum of the product subexpressions and dividing the entire thing by 256.
In all languages there is an order in which mathematical (and all other) operators are evaluated. It just so happens that * and / are higher in precedence than + and - in C/C++. You may refer to a C operator precedence table for more details.
To simplify what happened to you, I will create this simple equation:
2 + 4 + 6 + 4 / 2
Since division occurs first (and there are no parentheses to alter the order) it gets computed as:
2 + 4 + 6 + (4 / 2) = 14
Not:
(2 + 4 + 6 + 4) / 2 = 8
So my change to your code was the same as putting parentheses around 2 + 4 + 6 + 4 / 2, giving (2 + 4 + 6 + 4) / 2, and forcing the division to be done last, after all the additions are completed.
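A quick way to see the difference is to print both forms; the sample values below are made up just to make the effect visible (they are not the questioner's actual data):

#include <stdio.h>

int main(void)
{
    /* Made-up sample values standing in for the p*.value and *Area terms. */
    int a = 512, b = 512, c = 512, d = 512;

    /* Division binds tighter than addition: only d is divided by 256. */
    int without_parens = a + b + c + d / 256;       /* 512 + 512 + 512 + 2 = 1538 */

    /* Extra parentheses force the whole sum to be divided by 256. */
    int with_parens = (a + b + c + d) / 256;        /* 2048 / 256 = 8             */

    printf("%d %d\n", without_parens, with_parens); /* prints: 1538 8             */
    return 0;
}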
Let's have a look at the picture (it compares the Excel result with the SQL Server result).
The result is different even though the expression is the same.
Why does this happen?
I have to follow the Excel result; what should I do with SQL Server?
No matter what the software is, 1 + 1 will always yield 2; if it doesn't, you should check your calculation again. See below:
SELECT ((4972000.0000) * (1.0000 - 4.4000/100.0000))
/ ((1.0000 + ((36.0000/365.0000)) * (13.0000 / 100.0000)))
RESULT: 4693057.996104
To round the result to four decimal places, use the ROUND() function:
SELECT ROUND(((4972000.0000) * (1.0000 - 4.4000/100.0000))
/ ((1.0000 + ((36.0000/365.0000)) * (13.0000 / 100.0000))), 4)
RESULT: 4693057.996100