I read a solution in the original problem space during the INITSOLVE stage. Some multi-aggregated variables are ignored. I guess this is okay, since their values can be inferred once the other variables' values are fixed. However, the objective value of the read solution is now off, since the objective contribution of those multi-aggregated variables is not included. Is there any way around this?
The objective coefficients of multi-aggregated variables are added to the variables of the active representation, so that the objective value of the solution should normally still be correct.
However, it can happen that the multi-aggregation was done by a dual argument, i.e., there might be solutions where the multi-aggregated variable is set to a different value, but you can still set it to the value given by the multi-aggregation without deteriorating the objective. Moreover, presolving might change bounds or fix variables based on this type of argument as well.
In this case, your solution might not "fit" into the presolved problem, but it is "adjusted" to a solution with a value not worse than your original solution. Is this the case? Is the objective value of the solution better?
Moreover, you should check the objective function value of the solution with SCIPgetSolOrigObj() in order to get the objective value in the original space, since the objective offset and factor can be changed during presolving.
Also, please check the values of the variables in the original problem to see how the solution differs from the one you read in.
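For reference, a minimal C sketch of checking both objective values, assuming you already have the SCIP pointer and the read-in solution at hand (the helper name report_objectives is just illustrative):

#include <scip/scip.h>

/* Illustrative helper: print the solution's objective value in both the
 * original and the transformed (presolved) space, so the two can be
 * compared directly. */
static void report_objectives(SCIP *scip, SCIP_SOL *sol)
{
    SCIP_Real orig  = SCIPgetSolOrigObj(scip, sol);   /* original space */
    SCIP_Real trans = SCIPgetSolTransObj(scip, sol);  /* transformed space */

    SCIPinfoMessage(scip, NULL, "objective: original %.6f, transformed %.6f\n",
                    orig, trans);
}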
When parsing a file I need to detect whether an item with a minimum and maximum occurrence of 1 has been processed already. Later on, in validation, I need to detect if it was not processed at all.
I can do this with a count variable that increments each time, but that is cumbersome and inelegant. Perhaps a boolean flag. In general I would use some form of sentinel value, such as NULL for a pointer, or "" for a statically allocated string array, or memset() to zero for many items.
The problem is that if the full range of the data type is potentially valid input, it gets very sticky trying to pick a sentinel.
If the type is signed and only non-negative values are valid input, the sentinel can be any negative number. If the type is unsigned but the upper half of its range is unused, a value with the sign bit set can serve the same purpose.
If a larger data type can be used to store the value, the added range can hold the sentinel value, although this may raise issues of type compatibility, truncation, and promotion.
In an enum I can add an entry to act as the sentinel value.
It gets difficult to keep track of all the ways of indicating, for each member of a structure, whether it was initialized or not.
I almost forgot: an easy and universal way could be to make every variable dynamically allocated and initialized to NULL, even integers. Though a bit strange and perhaps slightly wasteful of memory, this would be highly consistent and would also let the boolean logic of conditional statements work, e.g.:
if (age) printf("Age is a valid variable with value: %d\n", *age);
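A lighter-weight variant of the same idea, with no heap allocation, is to pair each value with a validity flag. A minimal sketch (the opt_int type and set_once helper are mine, not from the question):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative pattern: carry a "seen" flag next to the value instead of
 * reserving a sentinel value inside the data type's range. */
typedef struct {
    bool seen;   /* has this field been parsed yet? */
    int  value;  /* meaningful only when seen is true */
} opt_int;

/* returns false if the field was already set, catching occurrence > 1 */
static bool set_once(opt_int *field, int v)
{
    if (field->seen)
        return false;
    field->seen = true;
    field->value = v;
    return true;
}

int main(void)
{
    opt_int age = { false, 0 };
    set_once(&age, 42);
    if (age.seen)
        printf("Age is a valid variable with value: %d\n", age.value);
    if (!set_once(&age, 7))
        printf("duplicate 'age' entry detected\n");
    return 0;
}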
Edit to clarify the question (no changes above):
I am parsing logs from another application (there is no documentation on the format). The log entries include data structures/objects, and the files also contain occasional corrupt entries, because another thread sometimes writes to them without synchronizing access.
The structures have members of any base type, e.g. integer, string, or sub-structure, in different quantities, e.g. 1, 0-1, or 1-N. It gets more complicated once you add the rules on valid combinations and valid sequences.
It might be easiest for me to define everything as an array with an associated counter variable.
I was motivated to ask about this because managing the initialization, and checking whether a variable has already been read in, starts to get overwhelming.
The next stage - input validation - is even more difficult.
The problem is that if the full range of the data type is potentially valid input, it gets very sticky trying to pick a sentinel.
I would say that if that is the situation, there is no way to make a sentinel. You might get lucky if the data type in question has a trap representation (which essentially means that there are some bit patterns you can store in the data type but which are not interpretable as a value of that type), which you could (ab)use.
Other than that, I think you need to resort to some secondary mechanism (such as a separate flag variable) to achieve your goal.
As a side note: sometimes it is practical (but not safe) to reason about what values might be valid but extremely unlikely input. You might use such a "special" value as a sentinel, but you would have to provide some functionality to determine whether, when you encounter such a "special" value, it truly is a sentinel or a valid input.
Think of an array of doubles: you could use the value of pi to 30 significant digits, if it is highly unlikely that you would ever encounter that number as valid input, say in accounting software. But you would still need some handler for the sentinel value to determine whether it truly is a sentinel or, indeed, valid but improbable.
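As a sketch of that last point in C, assuming double fields (the SENTINEL constant and is_unset helper are illustrative, not from the answer):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative: pi to many digits as an "improbable" sentinel, plus the
 * extra handler the answer calls for, since the value alone is not proof. */
#define SENTINEL 3.14159265358979323846

static bool is_unset(double v, bool was_parsed)
{
    /* treat the sentinel as "unset" only if nothing was actually parsed,
     * so a genuine (if improbable) input of pi is not misclassified */
    return v == SENTINEL && !was_parsed;
}

int main(void)
{
    double amount = SENTINEL;   /* field not filled in yet */
    bool parsed = false;
    printf("unset? %s\n", is_unset(amount, parsed) ? "yes" : "no");
    return 0;
}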
I am revising exam questions and I have come across this one that I cannot solve (10.a).
Since I can't modify the array, I know I cannot use bubble sort, for instance. The bit that is throwing me is the "not dependent on n" part: the only idea I can come up with is to select the array[i] element and compare it to array[i+j], which I understood was not allowed since it would depend on n.
Several of us in the course are scratching our heads over how to approach this one; could anybody give us an idea of how to solve it?
For the second part we are OK, since we have covered a few search algorithms that could solve the question.
may use only a constant amount of additional space
This means that your algorithm is only allowed to use a fixed number of memory cells. However, it does not mean that you're forbidden to access memory holding the input array.
Note that the question is talking about constant space, not constant time.
A solution that compares every array[i] to array[i+j] is perfectly acceptable, since it only needs 1 additional memory cell (holding the result).
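Question 10.a itself isn't quoted, so purely as an illustration of the space bound, here is a C sketch under the assumption that the task is "decide whether the read-only array contains a duplicate":

#include <stdbool.h>
#include <stdio.h>

/* O(n^2) comparisons but only a constant number of extra variables
 * (i and j), independent of n -- the array itself is read-only input
 * and does not count as additional space. */
static bool has_duplicate(const int a[], int n)
{
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] == a[j])
                return true;
    return false;
}

int main(void)
{
    int a[] = { 3, 1, 4, 1, 5 };
    printf("%s\n", has_duplicate(a, 5) ? "duplicate" : "all distinct");
    return 0;
}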
An abstract question, not related to any particular language:
If I have a function as follows
min(int, int) :: int
which returns the smaller of two integers, and
concat([int], [int]) :: [int]
which combines two arrays, how should I write a function like
minInArray([int]) :: int
which returns the smallest element in an array, but where the output could be chained like so, even with an empty input array:
min(minInArray(array1), minInArray(array2)) == minInArray(concat(array1, array2))
In other words, is there any commonly-used neutral element which minInArray could return on empty input, which wouldn't mess up min()?
One option would be to return some neutral value like null or NaN if the array has no elements; then, if min() is called and one of its arguments is that neutral value, you just return the minimum of the other array. Another option would be to return the closest value the language has to +Infinity if the array is empty. This works and does not require modifying min(), but it has the side effect that minInArray() sometimes returns an infinite value. That infinite value acts as a truly neutral element that works with the default min(), but it may cause some confusion if the minimum value in an array really is infinite.
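A minimal C sketch of the second option, assuming int elements (INT_MAX playing the role of +Infinity):

#include <limits.h>
#include <stdio.h>

/* INT_MAX is the identity element of min over int, so returning it for an
 * empty array keeps the chaining law
 * min(minInArray(a), minInArray(b)) == minInArray(concat(a, b)). */
static int min2(int a, int b) { return a < b ? a : b; }

static int minInArray(const int *a, int n)
{
    int best = INT_MAX;            /* identity: returned for empty input */
    for (int i = 0; i < n; i++)
        best = min2(best, a[i]);
    return best;
}

int main(void)
{
    int a[] = { 5, 2, 9 };
    int empty[] = { 0 };           /* contents unused; length 0 below */
    printf("%d\n", min2(minInArray(a, 3), minInArray(empty, 0)));  /* 2 */
    return 0;
}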
minInArray(arr1) should return null if arr1 is empty.
min() should prefer non-null values over null: it returns null only if both parameters are null; otherwise, it returns the minimum of the non-null values.
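A sketch of this null-propagating variant in C, modeling "null" as a NULL pointer (min_opt is an illustrative name):

#include <stdio.h>

/* "Missing" is modeled as NULL; min_opt prefers any non-null argument and
 * returns NULL only when both arguments are NULL. */
static const int *min_opt(const int *a, const int *b)
{
    if (a == NULL) return b;       /* null is neutral */
    if (b == NULL) return a;
    return (*a < *b) ? a : b;
}

int main(void)
{
    int x = 5;
    const int *m = min_opt(&x, NULL);
    if (m) printf("min = %d\n", *m);   /* prints 5 */
    else   printf("both inputs were empty\n");
    return 0;
}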
While thinking about the issue, we arrived at what seems to be the only possible solution:
if an array is empty, we should return the maximum possible value for int to satisfy the condition.
Not that nice actually...
Just to add some perspectives (not that this is a duplicate of the listed questions) -
All of these throw errors of some kind when asked to compute the min of an empty list or array: Java, Scala, Python, NumPy, JavaScript, C#. Probably more, but that's as far as I looked. I'm sure there are some that don't, but I'd expect most of those to be languages which have traded understandability and clarity for speed.
This question is about a specific language, but has answers relevant to all languages.
Note here how one can get around the issue in something like Python.
For Haskell in particular, note the advice in this question.
And lastly here's a response for a more general case of your question.
In general, it is always most important for code to work, but a close second is that it must be understandable to humans. Perhaps it doesn't matter for your current project, if you'll be the only one dealing with that function, but the last thing I'd expect when calling a 'get_minimum' function is Int.MAX.
I understand it makes the coding simple, but I'd urge you to beware of code that is easy to write and tricky to understand. A little more time spent making the code easy to read, with as much as possible having an immediately obvious meaning, will always save much more time later on.
I'm familiar with the idea of a hash function but I'm unclear on how GLib's implementation is useful. I'll explain this with an example.
Suppose I have an expensive function that is recursive (somehow) on the positive real numbers in a weird way that depends on number theory (I'm a mathematician). Let's say I have an algorithm that needs to compute the function on some smallish range of large numbers, say [1000000000, 1000999999].
I don't want to call my expensive function one million times, so I start memoizing values recursively. Then at each call I don't necessarily need to compute the whole function from scratch; I can hopefully reuse any values of the function on lower numbers (during my recursion) that I have already computed. Let's assume that the actual total number of calls at the first level of recursion is low, so that there are a lot of repeated values and memoizing actually saves a lot of time.
This is my cartoony way of understanding why a hash table data structure is useful. What I don't get is how to do this without knowing exactly what keys I'll need in advance.
Since the recursive function is number-theoretic, in general I don't know which values it will hit over and over again. So I'd like to just throw these in a bucket (hash table) as they pop out of recursive calls to my function.
With GLib, it would seem that your (key, value) pairs are always pointers to data that you personally have to keep lying around somewhere. So if my function is computing for input x, I don't know how to tell whether I've seen the value x before; g_hash_table_contains(), for example, needs a pointer, not the value x. So what's the use?!
I'm still learning, so be kind. I'm familiar with coding in C, but I haven't used hash tables in this language yet, and I'm trying to do so, and to become adept at it with GLib, but I just don't get this.
Let me take a crack at explaining it.
First of all, if we are using a hash map, then we certainly need [key, value] pairs as our input.
So, as users of a hash map, we have to be creative about choosing the key, and the choice varies depending on the use case.
In your case, as far as I understand, you have a function which works on a range and gives you a result, and which uses memoization so that the results of the small problems that constitute the bigger problem can be reused.
So, for example, you could use a string as your key, where the key is "1000999999"; computing that entry may use the result for 1000999998, which may in turn use the result for 1000999997, and so on. Whenever you do not find a result in the hash map, you calculate it and save it there.
In a nutshell, as users, we need to be creative about choosing keys.
A useful analogy is how you would go about choosing the primary key of a database table.
Another example to think about is how you would solve fibonacci(n) using a hash map.
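To make that concrete for GLib: integer keys don't need to "lie around somewhere", because you can pack the integer into the pointer itself with GUINT_TO_POINTER and hash it with g_direct_hash. A sketch of memoized fibonacci(n) on that basis (values must fit in a pointer, which holds for this demo):

#include <glib.h>
#include <stdio.h>

/* Memoized Fibonacci: keys and values are integers packed directly into
 * pointers, so nothing extra needs to be allocated or kept alive. */
static gsize fib(GHashTable *memo, guint n)
{
    gpointer cached;

    if (n < 2)
        return n;
    /* lookup_extended distinguishes "key missing" from "stored value is 0" */
    if (g_hash_table_lookup_extended(memo, GUINT_TO_POINTER(n), NULL, &cached))
        return GPOINTER_TO_SIZE(cached);

    gsize result = fib(memo, n - 1) + fib(memo, n - 2);
    g_hash_table_insert(memo, GUINT_TO_POINTER(n), GSIZE_TO_POINTER(result));
    return result;
}

int main(void)
{
    GHashTable *memo = g_hash_table_new(g_direct_hash, g_direct_equal);
    printf("fib(40) = %lu\n", (unsigned long) fib(memo, 40));  /* 102334155 */
    g_hash_table_destroy(memo);
    return 0;
}

For the original use case, the key would simply be the input x, packed the same way if it fits in a pointer, or stored as a heap-allocated key with g_int64_hash/g_int64_equal otherwise.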
I have been given an assignment to write a program that reads in a number of assignment marks from a text file into an array, and then counts how many marks there are within particular brackets, i.e. 40-49, 50-59 etc. A value of -1 in the text file means that the assignment was not handed in, and a value of 0 means that the assignment was so bad that it was ungraded.
I could do this easily using a couple of for loops, and then using if statements to check the values while incrementing appropriate integer counters to record the number of occurrences. But in order to get higher marks I need to implement the program in a "better" way. What would be a better, more efficient way to do this? I'm not looking for code right now, just simply "this is what you should do". I've tried to think of different ways to do it, but none of them seem better, and I feel as if I'm just trying to make it complicated for the sake of it.
I tried passing the 2D array that the values are stored in as a parameter to a function, and then using the function to print out the number of occurrences of the particular values, but I couldn't get this to compile because my syntax for a 2D array parameter was wrong, and I'm not too sure how to do this.
Any help would be appreciated, thanks.
Why do you need a couple of for loops? One is enough.
Create an array of size 10, where array[0] counts marks between 0-9, array[1] marks between 10-19, etc. When you see a mark, put it in the appropriate bucket using integer division, e.g. array[(int)mark / 10]++. When you finish, the array will contain the count of the marks in each bucket.
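A minimal sketch of that single pass in C, also folding in the question's -1 ("not handed in") and 0 ("ungraded") conventions, plus an extra slot for a mark of exactly 100 (the sample data is made up):

#include <stdio.h>

int main(void)
{
    /* made-up sample marks; -1 = not handed in, 0 = ungraded */
    int marks[] = { 67, -1, 43, 0, 88, 55, 100, 49 };
    int n = sizeof marks / sizeof marks[0];
    int buckets[11] = { 0 };   /* buckets[b] counts marks 10b..10b+9; buckets[10] counts 100 */
    int not_handed_in = 0, ungraded = 0;

    for (int i = 0; i < n; i++) {
        if (marks[i] == -1)
            not_handed_in++;
        else if (marks[i] == 0)
            ungraded++;
        else
            buckets[marks[i] / 10]++;   /* integer division picks the bracket */
    }

    for (int b = 0; b < 10; b++)
        printf("%2d-%2d: %d\n", b * 10, b * 10 + 9, buckets[b]);
    printf("  100: %d\n", buckets[10]);
    printf("not handed in: %d, ungraded: %d\n", not_handed_in, ungraded);
    return 0;
}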
As food for thought, if this is a school assignment, you might want to apply other things you have learned in the course.
Have you covered sorting yet? Maybe you could sort the list first so that you are not iterating over the array several times: go over it once, grab all the -1's and report how many you have, then grab all the marks in the next bracket, and so on.
Edit: that is, of course, assuming that you are using a 1D array.