Recently I saw an answer to a question explaining that addressing arrays as <number>[array] is valid C code.
How do square brackets work in C?
Example:
char x[] = {'A','B','C','D','E','F','G','H','I','J'};
printf("%d\n", 5[x]);
// Will print 70 == 'F'
This kind of notation seems cumbersome and potentially confusing for everybody, including the author.
Does this way of addressing arrays come with some justifiable advantage?
or
Can I continue with my life without worrying?
I have never encountered this in "real code" (i.e., outside of intentionally obfuscated things and puzzles with artificial limitations) so it would seem that it is quite universally agreed that this shouldn't be done.
However, I can come up with a contrived example where it might be considered by some (not necessarily me) a nicer syntax: if you have multiple pieces of data related to a single entity in a column, and you represent the rows as different arrays:
enum { ADA, BRIAN, CLAIRE };
const char *name[] = { "Ada", "Brian", "Claire" };
const unsigned age[] = { 30, 77, 41 };
printf("%s is %u years old\n", ADA[name], ADA[age]);
I will be the first to agree that this obfuscates the syntax by making it look like the people are the arrays instead of being the indexes, and I would prefer an array of struct in most cases. I think a case could be made for this being nicer-looking, though, or perhaps in some cases it would be a way to swap the rows and columns (arrays and indexes) with minimal edits elsewhere.
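For comparison, a minimal sketch of the array-of-structs version I would usually prefer (the struct and its field names are mine, purely illustrative):

#include <stdio.h>

enum { ADA, BRIAN, CLAIRE };

struct person { const char *name; unsigned age; };

static const struct person people[] = {
    { "Ada",    30 },
    { "Brian",  77 },
    { "Claire", 41 },
};

int main(void)
{
    printf("%s is %u years old\n", people[ADA].name, people[ADA].age);
    return 0;
}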
As far as I can tell, there are no technical pros or cons with either method. They are 100% equivalent. As the link you provided says, a[i] = *(a+i) = *(i+a) = i[a], since addition is commutative.
For subjective pros and cons: well, it's confusing. The form index[array] is useful for code obfuscation, but other than that I cannot see any use for it at all.
One reason (but I'm really digging here) to use the standard way is that a[b+c] is not equivalent to b+c[a]. You would have to write (b+c)[a] instead to make it equivalent. This can be especially important in macros, which is why macros usually have parentheses around every single argument in every single usage.
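A small sketch of that precedence pitfall (the array and the values are mine, purely for illustration):

#include <stdio.h>

int main(void)
{
    int a[] = {10, 20, 30, 40};
    int b = 1, c = 2;

    printf("%d\n", a[b + c]);   // 40: element at index 3
    printf("%d\n", (b + c)[a]); // 40: the parentheses keep the sum as the index
    printf("%d\n", b + c[a]);   // 31: parses as b + (c[a]) = 1 + a[2]
    return 0;
}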
It's basically the same argument as writing if(2==x) instead of if(x==2): if you accidentally write = instead of ==, you get a compiler error with the first form.
Can I continue with my life without worrying?
Yes.
Yes, array subscripting is commutative because addition is commutative. A reference like a[n] is converted to *(a+n), but n[a] is likewise converted to *(n+a), which is identical. If you want to win IOCCC competitions, you must use this.
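A quick sketch of that equivalence (this is guaranteed by the standard's definition of the subscript operator, not a compiler quirk):

#include <stdio.h>

int main(void)
{
    char a[] = "HELLO";
    // All four expressions denote the same element:
    printf("%c %c %c %c\n", a[1], *(a + 1), *(1 + a), 1[a]); // E E E E
    return 0;
}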
I've noticed in various C libraries, and most notably in the OpenGL API, that the word "name" is used to describe a simple integer value (often unsigned); example here. It sounds like what other areas of development might call an id (database keys) or a handle (file descriptors), which is why I'm surprised at the absence of those terms; I would ordinarily prefer them over "name", since "name" collides with string identifiers or descriptions. I'm guessing the reasons are historical.
Can anyone enlighten me with regard to the background of this term and whether or not it is still considered to be in vogue in the C community?
NB This is NOT a question about OpenGL so please could we keep the answers general - thanks.
Effectively, the API you linked to is creating a structure in the memory space of the OpenGL implementation, and you query the fields of that struct using an enumeration to indicate which field you want. The documentation refers to this as a 'symbolic name': it plays the role of the token you would use to query the struct if it lived in your program's own memory space:
x = texture.border_color becomes glGetTexParam(target, texture, GL_TEXTURE_BORDER_COLOR, &x)
so the enum parameter GL_TEXTURE_BORDER_COLOR is the equivalent of the field's symbolic name border_color.
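To make the analogy concrete, here is a hypothetical sketch (not actual OpenGL; every name in it is made up) of querying a hidden struct's fields through an enum of 'symbolic names':

#include <stdio.h>

/* Hypothetical symbolic names for the fields of a struct the caller never sees. */
enum tex_param { TEX_BORDER_COLOR, TEX_MIN_FILTER };

struct texture { int border_color; int min_filter; };

/* The caller selects a field by its symbolic name rather than by direct access. */
static void get_tex_param(const struct texture *t, enum tex_param pname, int *out)
{
    switch (pname) {
    case TEX_BORDER_COLOR: *out = t->border_color; break;
    case TEX_MIN_FILTER:   *out = t->min_filter;   break;
    }
}

int main(void)
{
    struct texture tex = { 0xFF00FF, 1 };
    int x;
    get_tex_param(&tex, TEX_BORDER_COLOR, &x);
    printf("%#x\n", x); /* 0xff00ff */
    return 0;
}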
I was wondering what some of the common pitfalls are that a novice Go programmer could fall into when unintentionally writing slow Go code.
1) First, I know that in Python string concatenation can be (or used to be) expensive. Is the same true in Go when appending to a string, as in "hello"+"World"?
2) The other issue is that I very often find myself having to extend my slice with a run of additional bytes (rather than one byte at a time). I have a "dirty" way of appending them by doing the following:
newStr := string(arrayOfBytes) + string(newBytesToAppend)
Is that way slower than just doing something like this?
for _, b := range newBytesToAppend {
    arrayOfBytes = append(arrayOfBytes, b)
}
Or is there a better way to append whole slices to other slices, or maybe a built-in way? It just seems a little odd to me that I would even have to write my own extend function (or even benchmark it).
Also, sometimes I end up having to loop through every element of the byte slice, and for readability I convert the current byte to a string, as in:
for _, b := range newBytesToAppend {
    c := string(b)
    // some more logic on c
    logic(c)
}
3) I was wondering whether converting between types in Go is expensive (especially between strings and byte slices) and whether that might be one of the factors making the code slow. By the way, sometimes I convert types (to strings) very often, nearly every iteration.
More generally, I searched the web for a list of things that often make Go code slow, so I could change my code accordingly, but didn't have much luck. I am well aware that this varies from application to application, but I was wondering whether there is any "expert" advice on what usually makes "novice" Go code slow.
4) The last thing I can think of: sometimes I know the length of the slice in advance, so I could use a fixed-length array instead. Would that change anything?
5) I have also made my own types as in:
type Num int
or
type Name string
Do those hinder performance?
6) Is there a general list of heuristics to watch out for when optimizing Go code? For example, is pointer dereferencing a problem, as it can be in C?
Use bytes.Buffer / Buffer.Write; it handles resizing the internal slice for you, and it's by far the most efficient way to manage multiple []byte slices.
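A minimal sketch of the Buffer approach, with sample data standing in for the question's variables (note that the built-in append can also take a whole slice at once via its variadic form):

package main

import (
    "bytes"
    "fmt"
)

func main() {
    // Sample data standing in for the question's variables.
    arrayOfBytes := []byte("hello ")
    newBytesToAppend := []byte("world")

    // bytes.Buffer manages the growth of its internal slice for you.
    var buf bytes.Buffer
    buf.Write(arrayOfBytes)
    buf.Write(newBytesToAppend)
    fmt.Println(buf.String()) // hello world

    // The built-in append also appends a whole slice via the ... form.
    combined := append(arrayOfBytes, newBytesToAppend...)
    fmt.Println(string(combined)) // hello world
}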
Regarding the second question, it's rather easy to answer with a simple benchmark.
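For instance, a benchmark sketch along these lines (file and function names are illustrative) compares the per-byte loop with the variadic append; run it with go test -bench=.:

// append_bench_test.go
package main

import "testing"

var chunk = []byte("some bytes to append")

func BenchmarkAppendPerByte(b *testing.B) {
    for i := 0; i < b.N; i++ {
        var dst []byte
        for _, c := range chunk {
            dst = append(dst, c)
        }
        _ = dst
    }
}

func BenchmarkAppendWholeSlice(b *testing.B) {
    for i := 0; i < b.N; i++ {
        var dst []byte
        dst = append(dst, chunk...)
        _ = dst
    }
}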
Today I was in a Webex meeting, sharing my screen with some Perl code I wrote. While everyone else was watching and listening, my boss suddenly told me that I had to remove the trailing commas from my hash and array structures because they are bad practice. I said I didn't think that was bad practice in Perl, but he insisted and made me delete those commas just to show my script running in the meeting.
I still think it's not a bad practice in Perl, but I could be wrong. I actually find trailing commas convenient and good practice, because they keep me from adding new elements and forgetting the corresponding comma in the process.
But I'd really like to know whether it's good or bad practice, and to be able to show my boss (if he's wrong) with good arguments and even good sources.
So, is it a bad practice to leave trailing commas?
This is an example:
my $hash_ref = {
    key1 => 'a',
    key2 => 'b',
    key3 => 'c',
};

my $array_ref = [
    1,
    2,
    3,
];
It's a great practice to have the trailing comma. Larry added it because he saw programmers add elements to a list (or whatever their language called it) but forget the separator character. Perl allows the trailing comma to make that less common. It's not a quirk or side effect of something else. That's what Perl wants you to do.
What is bad practice, however, is distracting a meeting full of people with something your boss could have corrected later. Unless the meeting was specifically a code review, your boss wasted a bunch of time. I've always wished that to join a video conference you had to enter your per-minute compensation, so a counter on everyone's screen could show how much money was being wasted. Spending a couple hundred dollars watching you remove commas from a working program would tamp down that nonsense.
So the PBP page referred to by Miller argues for making it easier to reorder the list by cutting and pasting lines; the mod_perl coding style document linked by Borodin argues for avoiding a momentary syntax error when you add stuff.
Much more significant than either, in my opinion, is that if you always have a trailing comma and you add a line, the diff only shows the line you added and the existing lines remain unchanged. This makes blame-finding better, and makes diffs more readable.
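For example, adding a key4 line to the hash above gives a hypothetical diff like this. Without a trailing comma, the previously last line has to change as well:

     key2 => 'b',
-    key3 => 'c'
+    key3 => 'c',
+    key4 => 'd'

With the trailing comma already in place, only the genuinely new line appears:

     key3 => 'c',
+    key4 => 'd',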
All three are good reasons for always using trailing commas, and there are in my opinion no good reasons not to do so.
The Apache mod_perl coding style document says this
Whenever you create a list or an array, always add a comma after the last item. The reason for doing this is that it's highly probable that new items will be appended to the end of the list in the future. If the comma is missing and this isn't noticed, there will be an error.
What your manager may have been thinking of is that doing the same thing in C is non-standard and non-portable; however, there is no excuse for his extraordinary behaviour.
It is indeed a good practice, and it is also mentioned in the famous PBP.
There is actually a perlcritic policy for it, which always gets me: https://metacpan.org/pod/Perl::Critic::Policy::CodeLayout::RequireTrailingCommas
I favor leading commas, though I know it's rather unpopular and seems to irritate the dyslexic. I also haven't been able to find a perltidy option for it. It fixes the line-change-diff problem as well (except for the first line, but that's not usually the one being changed, in my experience), and I adore the way the commas line up in neat columns. It also works neatly in languages that are whitespace-agnostic but don't allow trailing commas in lists. I think I learned this pattern while working with JavaScript...
my $hash_ref =
    { key1 => 'a'
    , key2 => 'b'
    , key3 => 'c'
    };

my $array_ref =
    [ 1
    , 2
    , 3
    ];
In this year's Google Code Jam I couldn't solve a single problem of the qualifying round. This (full code at the bottom) is what I came up with for the Fair and Square problem, and it was judged to be incorrect, twice.
It's agonizingly demoralizing, and I intend to change this scenario in the next Google Code Jam. I know one must practice (a lot!) to become a better programmer, and that it takes years to become an expert; I believe I am ready to give that much effort. But one can easily be overwhelmed by the size of the list of things to master. So my question is: can a learning path be devised so that one can identify and master the skills necessary to perform better in Google Code Jam and similar online contests? If yes, what would it be?
This path should allow one to master easier techniques first and then move on to harder ones, as if the difficulty level were gradually increasing.
This is my (wrong) solution to the Fair and Square problem:
#include <stdio.h>
#include <math.h>

int sqrRoot(double numberToCheck);
int isPalindrome(int numToCheck);

int main(void)
{
    int i = 0, j = 0, cases = 1, steps = 0, low = 0, high = 0, isSquare = 0, count = 0, bit = 0;
    scanf("%d", &steps);
    while (cases <= steps)
    {
        scanf(" %d %d", &low, &high);
        for (j = low; j <= high; j++)
        {
            isSquare = sqrRoot(j);
            if (isSquare == 1)
            {
                bit = isPalindrome(j);
                if (bit == 1)
                {
                    count++;
                    //printf("\n wowowo# %d", j);
                }
            }
        }
        printf("Case #%d: %d \n", cases, count);
        count = 0;
        bit = 0;
        cases++;
    }
    return 0;
}

int sqrRoot(double numberToCheck)
{
    double result = sqrt(numberToCheck);
    int y = 0;
    y = result;
    if (result == y)
    {
        return 1;
    }
    else
    {
        return 0;
    }
}

int isPalindrome(int numToCheck)
{
    int n = 0, rev = 0;
    double dig = 0.0;
    n = numToCheck;
    while (numToCheck > 0)
    {
        dig = numToCheck % 10;
        rev = rev * 10 + dig;
        numToCheck = numToCheck / 10;
    }
    if (n == rev)
    {
        return 1;
    }
    return 0;
}
Given your description of such a long and intimidating list of skills to learn, it seems the best approach would be to prepare for this competition only indirectly, and to focus directly on learning as much as possible about computer science. Being somewhat of an intermediate myself (and constantly striving to learn more), I can say that the periods when I have learned the most have been those when I sought not to acquire status (e.g. by winning awards) but honestly tried to emulate masters of the craft. In the case of computer science, reading books by people like Peter Norvig (through his books and MOOC courses) and attempting to reproduce their code has led to hard-earned but significant improvement in my skill. Above all else, such a strategy will allow you to improve enough to succeed in competitions.
You need to master algorithms and data structures. There is one book that you can read online for free that I highly recommend: Algorithms by Dasgupta.
Coursera.org has some courses on this too. Sedgewick's is really good, and his book is a nice addition to any bookshelf.
You can also practice with ACM problems or on TopCoder, which also provides some good tutorials.
I'm not an expert in algorithmic contests, but they ask you to do work which is very different from real-world algorithmic problem solving. First, they often require an exact solution, whereas in real life an approximation may often be a very good choice; requiring exactness simplifies validation, since only one output is possible. Also, many hard problems become trivial if you are allowed to solve them by approximation. The second difference is that the organizers have to be sure that the points awarded for a problem reflect its difficulty. The problem must not have a trivial solution and must require some work to solve. This means that most of the problems are variants of a well-known problem, and the competitors must figure out how to adapt the usual algorithms to it. This way, one can judge how fast and how well they adapted those algorithms.
For you, this means that you have to know the usual optimization algorithms. Fortunately, there are books about them; unfortunately, I don't know which ones to recommend (I mainly learned them at university). If you want to maximize your chances, you should work on these general algorithms:
Pathfinding (depth-first, breadth-first, best-first, Dijkstra, A*)
Dynamic programming
Branch & bound
Divide & conquer
Constraint programming
Graph algorithms
Perhaps flow networks, though you may not need them
Those subjects overlap, and in general it's not easy to tell which one will be useful. Now, this is only the theory. As I said, the difficulty is adapting these algorithms to the problem you have, and for that you can only train, a lot. Training is also required because it teaches you to implement them quickly and efficiently. There are many ways to write these algorithms, and with experience you will know them (Google will help) and know how to choose between them. Of course, you can't enter Google Code Jam, look at some problem and say "hmm... it reminds me of some problem I read about in a book"; you would lose too much time trying to implement it for the first time. This is also a big difference from real-world algorithmics.
Anyway, there are many other competitions. You should try them if you meet the requirements; they are usually fun and interesting.
Your solution is the simplest one can come up with for the Fair and Square problem. Of course, it's not enough: you always have to look for something better than the first solution. I believe yours is wrong in the sqrRoot function: floating-point arithmetic introduces small imprecisions which can break very simple tests like the one you wrote.
The cost of your algorithm comes from the number of values you test. These numbers may be huge (and, by the way, your code won't work for such huge numbers, as they can't be stored in C's usual 32-bit integers), and iterating through all of them might take an eternity. To improve that, iterate directly over the square roots: if you iterate r from sqrt(A) to sqrt(B), then the values r*r are exactly the squares from A to B. You no longer have to test whether a square root is an integer, and you have far fewer numbers to test.
This idea is very classical: reduce the size of the space to be iterated. You can further improve the algorithm by iterating only over roots that are themselves palindromes. Fair and square roots have some further mathematical properties, but I was too lazy to prove them, so I didn't look for another improvement. This is the final remark about your question: you may need basic mathematical knowledge to solve some of the problems. Usually, algorithmic skill also includes the ability to prove the algorithms you use.
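A minimal sketch of that idea (my own code, not a full contest solution: it assumes the range fits in an unsigned long long and glosses over floating-point edge cases in the initial sqrt; link with -lm):

#include <math.h>
#include <stdio.h>

static int isPalindrome(unsigned long long n)
{
    unsigned long long rev = 0, m = n;
    while (m > 0) {
        rev = rev * 10 + m % 10;
        m /= 10;
    }
    return rev == n;
}

/* Count fair-and-square numbers in [a, b] by iterating over square roots. */
static unsigned long long countFairAndSquare(unsigned long long a,
                                             unsigned long long b)
{
    unsigned long long count = 0;
    unsigned long long r = (unsigned long long)ceil(sqrt((double)a));
    for (; r * r <= b; r++) {
        /* Only palindromic roots whose squares are also palindromes qualify. */
        if (isPalindrome(r) && isPalindrome(r * r))
            count++;
    }
    return count;
}

int main(void)
{
    /* The fair-and-square numbers up to 1000 are 1, 4, 9, 121 and 484. */
    printf("%llu\n", countFairAndSquare(1, 1000)); /* prints 5 */
    return 0;
}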
A few times during discussions about programming, I have run into a misunderstanding caused by differing views on how consecutive zero-based array elements are referred to using ordinal numerals. There seem to be two views on this:
a[0] = "first";
a[1] = "second";
a[2] = "third;
vs:
a[0] = "zeroth";
a[1] = "first";
a[2] = "second";
I have always preferred the first, knowing that the "n-th" element is "the element at index n-1". But I was surprised by how many people found that counter-intuitive and used the latter version.
Is one of those conventions more correct than the other? Which should I use during discussion or documentation to avoid misunderstanding?
I think the English meaning of the word "first" is unambiguous, and refers to the initial element of a sequence. Having "first" refer to the successor of the initial element is just wrong.
In cases where there might be confusion, I would say "the third element, at index 2".
The element's index is pretty much language-dependent (e.g. C: 0, Lua: 1), whereas the fifth element is the fifth element; it's just the index that may differ ;)
I guess that's way too diffuse an answer...
In some languages, such as Pascal, you can specify the range of indexes explicitly. i.e.
var stuff : array[-3..3] of integer;
stuff[-3] is still the first element in the array, not the negative third.
Anyone saying 'zeroth' must not really believe in zero-based indexing.
The first element is the one that is taken from the stack first.