When writing a "try", why should we always write a "catch"? [closed]

Recently I had an interesting conversation with a fellow developer who told me that every time you write a "try" it is mandatory to provide a "catch". He could not explain why this is a rule, only that it is a principle of good programming. Why would this be a rule?
For the record, I don't agree with him. I think that sometimes you can write a "try" block with only a "finally" block. But it's true that I think if you write a "catch" you must do something in it, and never just re-throw the error.

You're right: you don't need to write a catch clause if you don't know what to do with the exception and just want to ensure your finally clause is executed.
It's bad practice to add a catch clause just to rethrow the exception.
As an aside, to illustrate that catch and finally are in fact related to two different (though not entirely unrelated) problems, note that some languages use different constructs for catching exceptions and for ensuring that some code (usually resource release) is executed. Go uses defer for the latter, for example.

In most applications try/finally constructs heavily outnumber try/catch constructs.
Because it's much more common to have resources to clean up than it is to receive an exception you know how to handle.
However, try/finally is nearly always replaceable by a using statement in C#, so in C# your developer might have a point in that case; but it most definitely isn't "a principle of good programming".
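As a minimal sketch of that equivalence (the file name "data.txt" is just an example), a try/finally that only releases a resource collapses into a using statement:

using System;
using System.IO;

// try/finally: the reader is disposed even if ReadToEnd throws.
StreamReader reader = new StreamReader("data.txt");
try
{
    Console.WriteLine(reader.ReadToEnd());
}
finally
{
    reader.Dispose();
}

// The equivalent using statement; the compiler expands it into the same try/finally.
using (StreamReader reader2 = new StreamReader("data.txt"))
{
    Console.WriteLine(reader2.ReadToEnd());
}

Neither version needs a catch; the only goal is to guarantee the cleanup.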

try
{
...
}
finally
{
...
}
Gives you the opportunity to execute code in the finally block that would otherwise get missed if an exception were thrown in the try block.
You only need to add a catch block if you have something specific to do when an exception occurs.


What is the point of encapsulation? [closed]

I just don't see the point of encapsulation. I see that in some cases you can modify a getter/setter to change the behavior of something or keep track of state, but whenever I create a non-conventional getter/setter I use a word like "modify" or "obtain". So what is the point of wasting hours writing repetitive methods that are practically pointless and inefficient?
I just don't get it. When I was a wee young programmer, I was told by some guy on IRC that not having it was the cause of a bug in my program, but I have known for years now that that was not the case. I've just been doing it anyway, so what is the point?
If I need to refactor later there are ways around it (weird ones, but ways nonetheless, at least in languages with overloaded operators), and APIs don't always have to be backwards compatible, so I don't see the point.
Can anyone enlighten me to the necessity of encapsulation?
In many cases you are right: small programs probably don't need encapsulation.
Some MS infrastructure (C#/WPF, I think, in several binding scenarios) requires encapsulation (using properties) and will not work without it.
If you do more in get/set than just storing or returning the value, it will make your code nicer and more robust (doing validation checks or other work in the setter, for example).
No one forces you to use it anyway...
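To sketch that setter point, here is a minimal example (the Order class and Quantity property are made up for illustration); the setter enforces an invariant in one place so invalid state never gets in:

using System;

public class Order
{
    private int quantity;

    public int Quantity
    {
        get { return quantity; }
        set
        {
            // Reject invalid values here once, instead of relying on every caller to check.
            if (value < 0)
                throw new ArgumentOutOfRangeException(nameof(value), "Quantity cannot be negative.");
            quantity = value;
        }
    }
}

A plain public field gives you no place to hang that check (or WPF change notification) later.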

Context-aware auto-complete [closed]

I'm trying to implement an autocomplete algorithm for a programming language. I want it to be context aware, meaning that suggestions must appear relative to the statement the user is currently typing.
What is the best way to go about this? What algorithms should I be looking into?
You do not actually need to parse the language to do this.
Assuming you have a list of valid symbols, you need only choose the most likely completions when the user presses the autocomplete key (say, TAB). You can weight the symbols by their frequency in the code, and you can also weight by symbol type, giving more weight to variable names than to reserved words. For example, if the user types "th[TAB]" and has a variable named "themes" which appears 50 times, that might be the top completion, with the reserved word "then" perhaps being second.
To generate the frequency weighting you need to count the number of times each symbol appears in the code. This can be done using a standard string search algorithm.
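A rough sketch of that frequency-weighted idea in C# (the Completer name is invented, and the regex-based symbol extraction is a simplification that a real tokenizer would do better):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

public static class Completer
{
    // Count how often each identifier-like symbol appears in the source text.
    public static Dictionary<string, int> CountSymbols(string source)
    {
        return Regex.Matches(source, @"[A-Za-z_]\w*")
                    .Cast<Match>()
                    .GroupBy(m => m.Value)
                    .ToDictionary(g => g.Key, g => g.Count());
    }

    // Candidates that start with the typed prefix, most frequent first.
    public static IEnumerable<string> Suggest(string prefix, Dictionary<string, int> counts)
    {
        return counts.Where(kv => kv.Key.StartsWith(prefix, StringComparison.Ordinal))
                     .OrderByDescending(kv => kv.Value)
                     .Select(kv => kv.Key);
    }
}

Calling Suggest("th", CountSymbols(source)) would rank "themes" ahead of rarer symbols; a per-symbol-type weight could be folded into the ordering step.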
If you do have a parser, you can do more fancy things. For example, if you determine all the methods of a class and the user enters the symbol for an instance of a class followed by a period, you can automatically display a list of the methods, because those are the only valid possibilities.
BTW: how you build the symbol list will depend on the language. For example, if it is Java, you can use the built-in introspection methods to identify all the defined symbols.
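In .NET the analogous approach uses reflection; here is a small sketch (the MemberCompletion name is invented) that lists a type's public instance methods, i.e. the candidates to show after "instance.":

using System;
using System.Linq;
using System.Reflection;

public static class MemberCompletion
{
    // Public instance methods of a type, suitable for completion after a period.
    public static string[] MethodsOf(Type type)
    {
        return type.GetMethods(BindingFlags.Public | BindingFlags.Instance)
                   .Select(m => m.Name)
                   .Distinct()
                   .OrderBy(n => n)
                   .ToArray();
    }
}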
You need a state machine that recognizes the grammar of your language. Additionally, the state transitions should be weighted according to their probability.
If the state of your engine is at "public static", the weight of the transition to "class" could be higher than that of "abstract". This would be necessary to keep the number of displayed suggestions practical.
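A toy sketch of such weighted transitions (the states, tokens, and weights below are invented purely for illustration):

using System.Collections.Generic;
using System.Linq;

public static class GrammarSuggestions
{
    // Hypothetical table: for a given parser state, the likely next tokens and their weights.
    private static readonly Dictionary<string, Dictionary<string, double>> Transitions =
        new Dictionary<string, Dictionary<string, double>>
        {
            ["public static"] = new Dictionary<string, double>
            {
                ["class"] = 0.6,
                ["void"] = 0.3,
                ["abstract"] = 0.1,
            },
        };

    // Tokens reachable from the current state, most likely first.
    public static IEnumerable<string> Next(string state)
    {
        return Transitions.TryGetValue(state, out var options)
            ? options.OrderByDescending(kv => kv.Value).Select(kv => kv.Key)
            : Enumerable.Empty<string>();
    }
}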

Using Task vs Dispatcher for UI thread actions [closed]

I know the technique for updating the UI thread from another thread.
So I have these two methods/techniques, which one should I use?
Using Task:
var uiTask = Task.Factory.StartNew(() =>
{
    // change something on ui thread
    var action = theActionOnUiThread;
    if (action != null)
    {
        action();
    }
}, CancellationToken.None, TaskCreationOptions.None, TaskScheduler.FromCurrentSynchronizationContext());
Using Dispatcher:
Dispatcher.CurrentDispatcher.BeginInvoke(
    new Action(() =>
    {
        // change something on ui thread
        var action = theActionOnUiThread;
        if (action != null)
        {
            action();
        }
    }));
From a technical point of view I doubt there is a 'best' here. However, I'd go with the dispatcher approach:
It makes your intent clearer, namely that you want to get something done on the main UI thread.
You don't need to bother with all the task factory options.
Dispatcher makes it easier to hide everything behind an interface (1), allowing easy dependency injection and unit testing.
(1) see here for example
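As a minimal sketch of that interface idea, assuming WPF (the IUiDispatcher, WpfUiDispatcher and ImmediateUiDispatcher names are made up for illustration):

using System;
using System.Windows.Threading;

// Hypothetical abstraction so view models depend on an interface rather than on the WPF Dispatcher directly.
public interface IUiDispatcher
{
    void Post(Action action);
}

public class WpfUiDispatcher : IUiDispatcher
{
    private readonly Dispatcher dispatcher;

    public WpfUiDispatcher(Dispatcher dispatcher)
    {
        this.dispatcher = dispatcher;
    }

    public void Post(Action action)
    {
        dispatcher.BeginInvoke(action);
    }
}

// In unit tests, a fake implementation can simply run the action inline.
public class ImmediateUiDispatcher : IUiDispatcher
{
    public void Post(Action action)
    {
        action();
    }
}

Production code would inject new WpfUiDispatcher(Application.Current.Dispatcher); tests inject the immediate fake, so no real dispatcher is needed.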
TaskScheduler.FromCurrentSynchronizationContext() does not guarantee returning you a TaskScheduler for the UI thread.
In fact it can fail outright when there is no current SynchronizationContext to capture, although these cases are rare and generally involve spinning up your own dispatcher in a native app (for example, the WIX bootstrapper).
So I'd say it's safer to use the dispatcher version.

How can you be DRY with a programming language that doesn't have Reflection? [closed]

I find any programming language that does not have a suitable reflection mechanism seriously debilitating for rapidly changing problems.
It seems that with certain languages it is incredibly hard, or simply not possible, to do:
Convention over Configuration
Automatic Databinding
AOP / Meta programming
without reflection.
Some example languages that do not have any sort of programmatic reflection are:
C, C++, Haskell, OCaml. I'm sure there are plenty more.
To show an example of DRY (Don't Repeat Yourself) being violated in most of these languages: when you have to write unit tests, you almost always need to register your test cases somewhere other than where you define them.
How do programmers of these languages mitigate this problem?
EDIT: Common languages that do have reflection, for those that do not know, are C#, Java, Python, Ruby, and my personal favorites, F# and Scala.
EDIT: The two common approaches, it seems, are code instrumentation and code generation. However, I have never seen instrumentation for C.
Instead of just voting to close, could some one please comment on why this should be closed and I'll delete the post.
You don't.
But you can keep the repetitions close to each other so when changing something, you see something else has to be changed too.
For example, I wrote a JSON parser that outputs objects; a typical use looks like this:
struct SomeStruct
{
    int a;
    int b;
    double c;

    // Marker typedef (presumably how the serializer recognizes serializable types).
    typedef int serializable;

    // Lists each field exactly once; adding or removing a member means touching this function too.
    template<class SerializerT> void serialize(SerializerT& s)
    {
        s("a", a)("b", b)("c", c);
    }
};
Sure, when you add a field you have to add another entry in the function, but maybe you don't want to serialize that field (something you'd have to handle in languages with reflection, too), and if you delete a field without removing it from the function, the compiler will complain.
I think it's a matter of degree. Reflection is just one very powerful method of avoiding repetition.
Any time you generalize a function from a specific case you are applying the DRY principle; the more general you make it, the more DRY it is. Just because some languages don't get you where reflection gets you doesn't mean there aren't DRY ways of programming in them. They may not be as DRY, but they can have their own advantages which, in total, may outweigh the advantages of using a language with reflection. (For example, the speed cost of heavy reflection use could be a consideration.)
Also, one way of getting something like the DRY benefits of reflection in a language that doesn't support it is to use a good code-generation tool. In that case you make a change once, in the code-generation template, and the template pushes it out to the different instances in the generated code. (I'm not saying whether or not using code generation is a good thing, but with a good "active" generator it is certainly one way of approximating the DRY benefit of reflection, and the benefits of code generation go beyond that. I'm thinking of something like CodeSmith, although there are many others: http://www.codesmithtools.com/)
Abstractly, do more at runtime, without the benefits of things like compile-time type checking (you have to essentially write your own type-checking routines) and beautiful code. E.g., use a table instead of a class. (But if you did this, why not use a dynamically-typed language instead?) This is often bad. I do not recommend this.
In C++, generic programming techniques allow you to programmatically include members of a class (is that what you want to do?) via inheritance.
One nice example for C++ unit testing is cxxtest:
http://cxxtest.tigris.org/. It uses convention and a Python script to generate your C++ test suite by post-processing your C++ code.
A good way to think about getting around restrictions in languages is Michael Feathers' notion of "seams". A seam is a place where your program's behavior can be changed without editing the code in that place. For example, in C the pre-processor and the linker provide seams. In C++, polymorphism is another one. In more dynamic languages, where you can change method definitions or reflect, you get even more flexibility. Without the seams things can be more complicated, and sometimes you just don't want to try to hammer a nail with your shoe but rather go with the flow of the tool at hand.

What do you do when you encounter a tricky problem? [closed]

What do you do when you encounter a programming problem that is really hard for you to solve and you have no idea how to approach it yet?
Usually, how do you solve it in the end?
NOTES:
Could someone suggest something on problem-solving practice?
If I can, I leave it alone for a while. Often the solution will pop into my head when I least expect it. (If only we always had the luxury of waiting - often we don't.)
Edit: Another hugely useful thing to do is describe the problem to someone else. Even if they can't help, the very act of explaining it to someone who's unfamiliar with the problem will often clarify things in your mind. Sometimes you get straight to a solution that way, without the other person saying a word. 8-)
I just think it over with a pencil and paper:
Break it down into each part
Look at what parts I know
Research parts I don't
Put it all together
Profit
For me the trick is breaking it into manageable bits.
-- Edit
I must agree with the poster above about talking to someone else, as well. Even if you don't have anyone you can talk to, explain it to a fluffy toy, and the answer will often become obvious.
I find using a whiteboard to explain the problem to someone else very useful.
Sometimes I'll search Stack Overflow to see if anyone has encountered the same or a similar problem; if they haven't, I'll sometimes post a question about it.
The book Peopleware puts it in a nice way that, despite being about a different context, also works here:
The manager's function, they write, is not to make people work but to make it possible for people to work.
In this case you are your own manager, so it's up to you to make it possible for yourself to work. If it's something difficult you are struggling with, then you need to listen to yourself: what is it you need in order to get started solving it?
For me, it can be that a major class in the project has the wrong name and is inelegant. In order to solve the problem in an elegant way this needs to be fixed first, otherwise it will end up as a half-baked solution.
My 10 cents.
For many problems, writing unit tests can help. Break it down (as silky suggests) and try writing tests for the various pieces. Then write code to make the tests pass. Check out some of the literature on TDD.
Writing throwaway "spike" code is also a handy way to figure out new things.
Well, it depends on the kind of problem and whether it's something you can research. For the things you can't (often specific design problems where I have trouble keeping all the factors in mind at once), I've found two methods that work well:
Get rid of all possible distractions (computer, phone, people), e.g. find an empty conference room. Take along pen and paper and draw a free-form diagram of the factors involved in the problem; sometimes tables also work well. I've found that concentrating hard without distractions, combined with the graphical representation, usually enables me to find a solution.
For really hard problems, sleep on it. Maybe that's just me, but I sometimes come up with the best ideas when I think about something in that half-dazed state right before I fall asleep - and strangely, I can always remember them come morning.
