Ada aspects which are private to a package

Let's say I have the stupidest ring buffer in the world.
size: constant := 16;
subtype T is integer;

package RingBuffer is
  procedure Push(value: T);
  function Pop return T;
end;

package body RingBuffer is
  buffer: array(0..size) of T;
  readptr: integer := 0;
  writeptr: integer := 1;

  procedure Push(value: T) is
  begin
    buffer(writeptr) := value;
    writeptr := (writeptr + 1) mod size;
  end;

  function Pop return T is
  begin
    readptr := (readptr + 1) mod size;
    return buffer(readptr);
  end;
end;
Because my code sucks, I want to add preconditions and postconditions to make sure I don't misuse it. So I change the implementation of Push as follows:
procedure Push(value: T) with
  pre => readptr /= writeptr
is begin
  buffer(writeptr) := value;
  writeptr := (writeptr + 1) mod size;
end;
However, I get a compile error because I need to put the aspect definitions in the declaration of the procedure, not in the implementation.
The thing is, this is a package. My declaration is public. The values that the precondition depends on belong to the package body, which isn't visible to the declaration. To put the aspect definition in the declaration I'd have to refactor my code to expose implementation details (in this case, readptr and writeptr) in the public part of the package. And I don't want to do that.
I can think of a few ways round this, such as having my implementation of Push() call a private PushImpl() procedure defined only in the body which actually has the precondition... but that's horrible. What's the right way to do this?

I think this is always going to be a problem when the validation checks are private, and that the solution is to declare a function to do the check:
package RingBuffer is
  function Is_Full return Boolean;
  procedure Push(value: T) with Pre => not Is_Full;
  function Pop return T;
end RingBuffer;
(Is_Full is probably useful anyway; in other cases it might not be so).
If you keep the buffer variables in the package body, you'll need to put the body of Is_Full there too; alternatively you could move them into the private part of the spec and use an expression function:
package RingBuffer is
  function Is_Full return Boolean;
  procedure Push(value: T) with Pre => not Is_Full;
  function Pop return T;
private
  buffer: array(0..size) of T;
  readptr: integer := 0;
  writeptr: integer := 1;
  function Is_Full return Boolean is (Readptr = Writeptr);
end RingBuffer;

The contract aspects are intended to be used in the public view of (sub)types and subprograms.
If you want to keep the check in the private view, then it is simple to write it as the first statement in the subprogram:
begin
  if Is_Full then
    raise Constraint_Error with "Ring buffer is full.";
  end if;
  ...
Some unsolicited advice:
Make the contracts public, so the users of the package can see how it should be used.
Insert a similar Is_Empty check when popping an item from the buffer.
Make your index type modular: type Indexes is mod 16;

Related

Golang best practices: empty array response or error?

What are best practices in terms of error handling for a function that accepts a slice of objects and returns another slice of objects (ideally of the same length as the input slice) along with an error, as follows:
func ([]interface{}) ([]interface{}, error)
One way is to return an error as soon as processing any one of the elements fails, but then at the receiving end the caller either discards the whole slice or is left with an error that merely says that one element (or all of them) failed. Another way is to return an error only when none of the elements could be processed, which again is of little use, I feel. A third way is to drop the error return entirely and instead give each element's result struct its own error field, so errors can be reported element by element.
The best way obviously depends on the particular scenario, however, I want to know if there are any best practices people follow or any design patterns around this problem.
PS: This was one of the closest existing questions, but since it accepts a single object as input, it's not very relevant:
Return empty array or error
... a function that accepts [slice of interface representing an] array of objects and returns another [slice of interface representing an] array of objects along with error ...
You have not told us enough to go on.
Does the returned slice actually have anything to do with the parameter slice?
If so, what relationship do they have? For instance, perhaps the returned slice should be half the size of the input slice, and an error occurs if and only if the number of input objects is odd, in which case the last input object has been ignored.
Must inputs be processed in order, or will they be processed in parallel?
A third way is to drop the error return entirely and instead give each element's result struct its own error field, so errors can be reported element by element.
This is probably a wise approach if the outputs are one-to-one with the inputs and you intend to handle them in parallel and/or continue processing the remaining inputs upon reaching one bad one. Equivalently, you can have the output slice include an error.
It's really very problem-dependent.
Edit: consider, e.g., the following (which I don't claim is good, mind you):
const maxWorkers = 10 // tunable

// Process a slice of T's in parallel. The results are either an
// R for each T, or an error. Caller provides the actual function
// f(T), which returns R + error (an empty/zero R for error).
func ProcessInParallel(input []T, f func(T) (R, error)) ([]interface{}, error) {
    // Make one channel for sending work to workers,
    // and one for receiving results from workers.
    type Todo struct {
        i    int // the index of the item
        item T   // the item to work on
    }
    workChan := make(chan Todo)
    type Done struct {
        i int   // the index of the item worked on
        r R     // result, if we have one
        e error // error, if we have one
    }
    doneChan := make(chan Done)
    // Spin off workers: maxWorkers or len(input),
    // whichever is smaller.
    n := len(input)
    if n > maxWorkers {
        n = maxWorkers
    }
    var wg sync.WaitGroup
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func(i int) {
            for todo := range workChan {
                i := todo.i
                r, err := f(input[i])
                doneChan <- Done{i, r, err}
            }
            wg.Done()
        }(i)
    }
    // Close doneChan when all workers finish.
    go func() {
        wg.Wait()
        close(doneChan)
    }()
    // Hand out work to workers (then close work channel).
    go func() {
        for i := range input {
            workChan <- Todo{i, input[i]}
        }
        close(workChan)
    }()
    // Collect results.
    var anyErr error
    ret := make([]interface{}, len(input))
    for done := range doneChan {
        i := done.i
        r, err := done.r, done.e
        if err != nil {
            anyErr = err
            ret[i] = err
        } else {
            ret[i] = r
        }
    }
    return ret, anyErr
}
This has an overall error return, and it returns a slice of interface{}. This means you can immediately tell if everything worked. However, it's kind of annoying to use:
ret, err := ProcessInParallel(arg, f)
if err != nil {
    fmt.Println("some inputs failed")
    for i := range ret {
        if e, ok := ret[i].(error); ok {
            fmt.Printf("%d: failed: %v\n", i, e)
        } else {
            fmt.Printf("%d: %s\n", i, ret[i].(R))
        }
    }
} else {
    fmt.Println("all inputs were good")
    for i := range ret {
        fmt.Printf("%d: %s\n", i, ret[i].(R))
    }
}
Why bother with the all-error summary?
Instead, we could have ProcessInParallel return []R, []error, for instance, or—probably better—use a simple error interface return value to store a MultiError as Cerise Limón suggested in a comment:
ret, err := ProcessInParallel(arg, f)
if err != nil {
    if merr, ok := err.(datastore.MultiError); ok {
        // merr[i] indicates the various failed items
        // any ret[i] for which merr[i] is nil is OK
    }
} else {
    // all ret[i] are ok
}
A working example that doesn't use MultiError is here.
A working example that does use MultiError is here.
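If you don't want the App Engine dependency just for the type, a MultiError is essentially a slice of errors with one slot per input. Here is a minimal sketch of such a type (my own illustration of the idea, not the datastore implementation; it's a fragment that assumes "fmt" is imported):
// MultiError holds one error slot per input; a nil slot means that
// input succeeded. (datastore.MultiError behaves the same way.)
type MultiError []error

func (m MultiError) Error() string {
    failed := 0
    for _, e := range m {
        if e != nil {
            failed++
        }
    }
    return fmt.Sprintf("%d of %d inputs failed", failed, len(m))
}
ProcessInParallel would then return nil when every element succeeded and the whole MultiError otherwise, so a plain err != nil check still means "at least one element failed" while merr[i] pinpoints which ones.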
While Go supports multiple return values, when one of them is an error the convention is to act on either the error or the other return values, not both. That is, when the error is non-nil, the other return values have no specific meaning and should not be processed.
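In code, that convention looks like the following (doSomething is just a hypothetical function used for illustration):
v, err := doSomething()
if err != nil {
    // Handle or return the error; v carries no meaning here.
    return err
}
// Use v only on the success path.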
In your case, I'd personally prefer to use an iterator pattern, similar to what is implemented for database/sql.Rows, such that:
func X(values []interface{}) *Result
The Result would hold all processed slice elements associated with their errors. Somewhere in the code I would write something like this:
result := X(values)
for result.Next() {
    if err := result.Err(); err != nil {
        // Handle the err for this specific element.
        // Either continue or fail the whole process.
    }
    v := result.Cur()
    // Process current element.
}
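For concreteness, here is one shape such a Result type could take. This is my own sketch of the idea rather than an existing API; the items and errs fields are assumptions about what X would populate:
// Result iterates over per-element outcomes, in the spirit of sql.Rows.
type Result struct {
    items []interface{} // processed elements
    errs  []error       // errs[i] is non-nil if items[i] failed
    pos   int           // current position; starts before the first element
}

// Next advances to the next element; it returns false when done.
func (r *Result) Next() bool {
    r.pos++
    return r.pos <= len(r.items)
}

// Err returns the error for the current element, if any.
func (r *Result) Err() error { return r.errs[r.pos-1] }

// Cur returns the current element's processed value.
func (r *Result) Cur() interface{} { return r.items[r.pos-1] }
X would fill items and errs while processing the input slice; the caller then walks the results with the loop shown above.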

About Definition: Rewriting Algorithm from Go Code to C

I'm currently translating code for a weighted DAG, written in Go, into C; it is topologically sorted. I missed one part of the code, namely the function in the sample below. I couldn't work out what the "visit" declaration is. Is it a function declaration within another function? If you could explain it in C syntax, that would be great.
func (g *graph) topoSort() []int {
    result := make([]int, g.size())
    marks := make([]bool, g.size())
    resultIndex := g.size() - 1
    var visit func(int)
    visit = func(u int) {
        for _, item := range g.adjList[u] {
            if !marks[item.vertex] {
                visit(item.vertex)
            }
        }
        marks[u] = true
        result[resultIndex] = u
        resultIndex--
    }
    for u := range g.adjList {
        if !marks[u] {
            visit(u)
        }
    }
    return result
}
Yes, it's a local function definition, and it closes over marks (as well as result, resultIndex, and g), which makes it not worth translating directly. You can transform it into an ordinary static function if you also change it to take the state it closes over, starting with marks, as arguments.
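As a rough sketch of that transformation (still written in Go here, but the shape maps one-to-one onto a static C helper taking the same arguments), the closure becomes a plain recursive function that receives everything it used to capture; the parameter list below is my own choice, not part of the original code:
// visit with the captured state passed explicitly; in C this would be a
// static function taking the graph, the marks and result arrays (plus
// their length), and a pointer to the running result index.
func visit(g *graph, u int, marks []bool, result []int, resultIndex *int) {
    for _, item := range g.adjList[u] {
        if !marks[item.vertex] {
            visit(g, item.vertex, marks, result, resultIndex)
        }
    }
    marks[u] = true
    result[*resultIndex] = u
    *resultIndex = *resultIndex - 1
}
topoSort's final loop then calls visit(g, u, marks, result, &resultIndex) instead of the closure.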

Delphi - TStack Capacity confusion

I have been working with TStack to try and implement a simple Undo/Redo feature in my program. The thought behind this is that as an action is performed the current state of the program is saved - i.e. pushed to the stack. When the user clicks undo the last state of the program is reloaded - i.e. popped from the stack.
The flaw in this idea is that the stack cannot go on growing forever meaning that after a capacity value is reached, the oldest items (those at the bottom of the stack) should be removed as new items are pushed on top.
The TStack object in Delphi contains a Capacity property which I assumed would automatically perform this 'clean-up', but when I overload the stack (e.g. push 11 items onto one with capacity 10) the capacity simply grows to accommodate more items.
Can anyone offer me any advice on how to use TStack more effectively in this instance? I understand that an alternative would be to use an array structure but I like the prospective ease of using stacks.
Regards
In reality the TStack is powered by a dynamic array.
If you really wanted to you could abuse this fact to have the stack remove items from the bottom, but now you'd be fighting the stack.
I think it's a better idea just to use a circular list.
A simple design might work like this.
type
  // requires System.Generics.Collections for TList<T>
  TCircularList<T> = class(TObject)
  private
    FStorage: TList<T>;
    FCapacity: cardinal; //duplication for performance reasons
    FCurrentIndex: cardinal;
  protected
    function GetItem(index: cardinal): T;
    procedure SetItem(index: cardinal; const Value: T);
  public
    constructor Create(Size: cardinal = 10);
    destructor Destroy; override;
    procedure Add(const Item: T);
    property Items[index: cardinal]: T read GetItem write SetItem; default;
  end;

constructor TCircularList<T>.Create(Size: cardinal);
begin
  inherited Create;
  Assert(Size >= 2);
  FStorage := TList<T>.Create;
  FCapacity := Size;
  FStorage.Capacity := Size;
  FStorage.Count := Size; // pre-size the list so the slots can be written by index
end;

destructor TCircularList<T>.Destroy;
begin
  FStorage.Free;
  inherited;
end;

procedure TCircularList<T>.Add(const Item: T);
begin
  FCurrentIndex := (FCurrentIndex + 1) mod FCapacity;
  FStorage[FCurrentIndex] := Item;
end;

function TCircularList<T>.GetItem(index: cardinal): T;
var
  cIndex: cardinal;
begin
  cIndex := index mod FCapacity;
  Result := FStorage[cIndex];
end;

procedure TCircularList<T>.SetItem(index: cardinal; const Value: T);
var
  cIndex: cardinal;
begin
  cIndex := index mod FCapacity;
  FStorage[cIndex] := Value;
end;
Obviously you'd need a few more methods, such as Last and Delete, but I'll leave those up to you; you should be able to extrapolate from here.
Usability remarks
I have to say that from a UX perspective I think the idea of an undo/redo function sucks.
Why not have a list of snapshots, so that you can go backwards and forwards in time, much like the situation would be if you saved a number of backup files on disk?
The undo function requires you to memorize exactly what you've done the last x steps which does not scale very well.
I also don't understand why there must be a limit, why not allow as much undo/snapshots as memory/disk space permits?

What are the sign extension rules for calling Windows API functions (stdcall)? This is needed to call WinAPI from Go, which is strict about int types

Oops, there was one thing I forgot when I made this answer, and it's something that I'm both not quite sure on myself and that I can't seem to find information for on MSDN and Google and the Stack Overflow search.
There are a number of places in the Windows API where you use a negative number, or a number too large to fit in a signed integer; for instance, CW_USEDEFAULT, INVALID_HANDLE_VALUE, GWLP_USERDATA, and so on. In the world of C, everything is all fine and dandy: the language's integer promotion rules come to the rescue.
But in Go, I have to pass all my arguments to functions as uintptr (which is equivalent to C's uintptr_t). The return value from the function is also returned this way, and I then need to compare it against these constants. Go doesn't allow integer promotion, and it doesn't allow you to convert a signed constant expression into an unsigned one at compile time.
Right now, I have a bit of a jerry-rig set up for handling these constants in my UI library. (Here's an example of what this solution looks like in action.) However, I'm not quite satisfied with this solution; it feels to me like it's assuming things about the ABI, and I want to be absolutely sure of what I'm doing.
So my question is: how are signed values handled when passing them to Windows API functions and how are they handled when returning?
All my constants are autogenerated (example output). The autogenerator uses a C ffi, which I'd rather not use for the main project since I can call the DLLs directly (this also makes cross-compilation easier at least for the rest of the year). If I could somehow leverage that, for instance by making everything into a C-side variable of the form
uintptr_t x_CONST_NAME = (uintptr_t) (CONST_NAME);
that would be helpful. But I can't do that without an answer to this question.
Thanks!
Update
Someone on IRC put it differently (reformatted to avoid horizontal scrolling):
[19:13] <FraGag> basically, you're asking whether an int with a value of -1
will be returned as 0x00000000FFFFFFFF or as 0xFFFFFFFFFFFFFFFF
if an int is 4 bytes and an uintptr is 8 bytes
Basically this, but specifically for Windows API interop, for parameters passed in, and regardless of uintptr size.
#twotwotwo's comments to my question pointed me in the right direction. If Stack Overflow allowed marking comments as answers and having multiple answers marked, I'd do that.
tl;dr version: what I have now is correct after all.
I wrote a program (below) that simply dumped all the constants from package syscall and looked for constants that were negative, but not == -1 (as that would just be ^0). The standard file handles (STD_ERROR_HANDLE, STD_INPUT_HANDLE, and STD_OUTPUT_HANDLE) are (-12, -10, and -11, respectively). The code in package syscall passes these constants as the sole argument of getStdHandle(h int), which produces the required file handle for package os. getStdHandle() passes this int to an autogenerated function GetStdHandle(stdhandle int) that wraps a call to the GetStdHandle() system call. GetStdHandle() takes the int and merely converts it to uintptr for passing into syscall.Syscall(). Though no explanation is given in the autogenerator's source (mksyscall_windows.go), if this didn't work, neither would fmt.Println() =P
All of the above is identical on both windows/386 and windows/amd64; the only thing in a processor-specific file is GetStdHandle(), but the relevant code is identical.
My negConst() function is already doing the same thing, just more directly. As such, I can safely assume that it is correct.
Thanks!
// 4 june 2014
// based on code from 24 may 2014
package main

import (
    "fmt"
    "os"
    "strings"
    "go/token"
    "go/ast"
    "go/parser"
    "code.google.com/p/go.tools/go/types"
    _ "code.google.com/p/go.tools/go/gcimporter"
)

var arch string

func getPackage(path string) (typespkg *types.Package, pkginfo types.Info) {
    var pkg *ast.Package
    fileset := token.NewFileSet() // parser.ParseDir() actually writes to this; not sure why it doesn't return one instead
    filter := func(i os.FileInfo) bool {
        if strings.Contains(i.Name(), "_windows") &&
            strings.Contains(i.Name(), "_" + arch) &&
            strings.HasSuffix(i.Name(), ".go") {
            return true
        }
        if i.Name() == "race.go" || // skip these
            i.Name() == "flock.go" {
            return false
        }
        return strings.HasSuffix(i.Name(), "_windows.go") ||
            (!strings.Contains(i.Name(), "_"))
    }
    pkgs, err := parser.ParseDir(fileset, path, filter, parser.AllErrors)
    if err != nil {
        panic(err)
    }
    for k, _ := range pkgs { // get the sole key
        if pkgs[k].Name == "syscall" {
            pkg = pkgs[k]
            break
        }
    }
    if pkg == nil {
        panic("package syscall not found")
    }
    // we can't pass pkg.Files directly to types.Check() because the former is a map and the latter is a slice
    ff := make([]*ast.File, 0, len(pkg.Files))
    for _, v := range pkg.Files {
        ff = append(ff, v)
    }
    // if we don't make() each map, package types won't fill the structure
    pkginfo.Defs = make(map[*ast.Ident]types.Object)
    pkginfo.Scopes = make(map[ast.Node]*types.Scope)
    typespkg, err = new(types.Config).Check(path, fileset, ff, &pkginfo)
    if err != nil {
        panic(err)
    }
    return typespkg, pkginfo
}

func main() {
    pkgpath := "/home/pietro/go/src/pkg/syscall"
    arch = os.Args[1]
    pkg, _ := getPackage(pkgpath)
    scope := pkg.Scope()
    for _, name := range scope.Names() {
        obj := scope.Lookup(name)
        if obj == nil {
            panic(fmt.Errorf("nil object %q from scope %v", name, scope))
        }
        if !obj.Exported() { // exported names only
            continue
        }
        if _, ok := obj.(*types.Const); ok {
            fmt.Printf("egrep -rh '#define[ ]+%s' ~/winshare/Include/ 2>/dev/null\n", obj.Name())
        }
        // otherwise skip
    }
}
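For reference, the conversion behaviour the conclusion above relies on can be seen in a few lines. This is a standalone sketch of my own, with stdErrorHandle standing in for the STD_ERROR_HANDLE value from the Windows headers:
package main

import "fmt"

const stdErrorHandle = -12 // stand-in for STD_ERROR_HANDLE from the Windows headers

func main() {
    // uintptr(stdErrorHandle) would not compile: a negative constant
    // cannot be converted to an unsigned type at compile time.
    // Going through an int variable (as syscall's getStdHandle(h int) does)
    // is fine, and the conversion sign-extends on a 64-bit build.
    h := stdErrorHandle // h has type int
    u := uintptr(h)
    fmt.Printf("%#x\n", u) // 0xfffffffffffffff4 on a 64-bit platform
}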

Convert JIntArray to Array of Object

function Java_com_erm_controller_ARMReports_S35(PEnv: PJNIEnv; Obj: JObject; ex_UserRowID, ex_BSID : Integer; ex_RevalDate : JString;
  ex_AFS, ex_HTM, ex_HFT : Boolean;
  ex_IsMcCaulay_PNL: Boolean;
  ex_Maturity, ex_Scale : JIntArray
  ): Integer; stdcall; export;
var
  objRpt : TARMReports;
  I : Integer;
  Len : JInt; //just a renamed delphi integer
  aMaturity: array of Integer;
  aScale: array of Integer;
begin
  DLLErrorLog('CASH -S35');
  objRpt := TARMReports.Create; JVM := TJNIEnv.Create(PEnv); ex_RevalDate_J := JVM.JStringToString(ex_RevalDate);
  Len := PEnv^.GetArrayLength(PEnv, ex_Maturity);
  SetLength(aMaturity, Len);
  Len := PEnv^.GetArrayLength(PEnv, ex_Scale);
  SetLength(aScale, Len);
  DLLErrorLog('ex_Maturity Length' + intToStr(Len));
  for I := 0 to Len-1 do
  begin
    PEnv^.GetIntArrayRegion(PEnv, ex_Maturity, I, Len, @aMaturity[I]);
    DLLErrorLog('ex_Maturity ' + IntToStr(aMaturity[I]));
    PEnv^.GetIntArrayRegion(PEnv, ex_Scale, I, Len, @aScale[I]);
    DLLErrorLog('ex_Scale ' + IntToStr(aScale[I]));
  end;
  Result := objRpt.S35(ex_UserRowID, ex_BSID, ex_RevalDate_J,
    ex_AFS, ex_HTM, ex_HFT,
    ex_IsMcCaulay_PNL,
    aMaturity, aScale
    );
  DLLErrorLog('CASH2 Ends -S35');
  JVM.Free; objRpt.Free;
end;
I need to convert ex_Maturity and ex_Scale to Delphi arrays of Integer.
When calling this from Java, it now throws java.lang.ArrayIndexOutOfBoundsException.
The array values do show up in the log. Please suggest what I need to change to make this work.
There are a couple of ways, depending on what, exactly, your JIntArray is.
Firstly, if it's an array of int (as in the primitive Java type), then get the length of the array via JNI, allocate a Delphi array of Integer, and then get JNI to copy the data from the Java array:
Uses
  AndroidAPI.JNI;
Var
  Len: JNIInt; //just a renamed delphi integer
  aMaturity: array of integer;
begin
  Len := PEnv^.GetArrayLength(PEnv, ex_Maturity);
  //allocate the receiving array
  SetLength(aMaturity, Len);
  //now get the array data - note we are passing the address of the first element
  //not the address of the array itself!
  PEnv^.GetIntArrayRegion(PEnv, ex_Maturity, 0, Len, @aMaturity[0]);
  //do stuff
end;
If you are dealing with an array of Integer (that's the Java class "Integer") then you need to get the array of objects from JNI one element at a time and use TJNIResolver to get the raw value:
Uses
  AndroidAPI.JNI, AndroidAPI.JNIBridge;
Var
  Len: JNIInt; //just a renamed delphi integer
  Count: Integer;
  Current: JNIObject;
  CurrentValue: integer;
  aMaturity: array of integer;
begin
  Len := PEnv^.GetArrayLength(PEnv, ex_Maturity);
  //allocate the receiving array
  SetLength(aMaturity, Len);
  For Count := 0 to Len-1 do
  begin
    Current := PEnv^.GetObjectArrayElement(PEnv, ex_Maturity, Count);
    if assigned(Current) then
    begin
      CurrentValue := TJNIResolver.GetRawValueFromJInteger(Current);
      //Yes, you can inline this but the point is, here you do stuff with
      //the element
      aMaturity[Count] := CurrentValue;
    end;
  end;
end;
Obviously the first method is much faster as crossing the JNI barrier is slow and you are only doing it once, whereas with the array of Java Integers you are doing it multiple times for each element.
You should also watch out for errors - I'm not checking for Java exceptions at any point which could crash and burn your app if you don't deal with them.
Edit: The OP has read my answer and tried to work with it, which is nice. They have gotten an out-of-bounds exception in their code.
function Java_com_erm_controller_ARMReports_S35(PEnv: PJNIEnv; Obj: JObject; ex_UserRowID, ex_BSID : Integer; ex_RevalDate : JString;
  ex_AFS, ex_HTM, ex_HFT : Boolean;
  ex_IsMcCaulay_PNL: Boolean;
  ex_Maturity, ex_Scale : JIntArray
  ): Integer; stdcall; export;
var
  objRpt : TARMReports;
  I : Integer;
  Len : JInt; //just a renamed delphi integer
  aMaturity: array of Integer;
  aScale: array of Integer;
begin
  DLLErrorLog('CASH -S35');
  objRpt := TARMReports.Create; JVM := TJNIEnv.Create(PEnv); ex_RevalDate_J := JVM.JStringToString(ex_RevalDate);
  //you only have 1 length defined and possibly different array lengths
  //process arrays separately
  Len := PEnv^.GetArrayLength(PEnv, ex_Maturity);
  SetLength(aMaturity, Len);
  DLLErrorLog('ex_Maturity Length' + IntToStr(Len));
  //only call this once, also watch the parameters you are passing in
  PEnv^.GetIntArrayRegion(PEnv, ex_Maturity, 0, Len, @aMaturity[0]);
  Len := PEnv^.GetArrayLength(PEnv, ex_Scale);
  SetLength(aScale, Len);
  DLLErrorLog('ex_Scale Length' + IntToStr(Len));
  PEnv^.GetIntArrayRegion(PEnv, ex_Scale, 0, Len, @aScale[0]);
  Result := objRpt.S35(ex_UserRowID, ex_BSID, ex_RevalDate_J,
    ex_AFS, ex_HTM, ex_HFT,
    ex_IsMcCaulay_PNL,
    aMaturity, aScale
    );
  DLLErrorLog('CASH2 Ends -S35');
  JVM.Free; objRpt.Free;
end;
What you were doing was getting the length twice and setting the Delphi array lengths correctly, but then looping over both arrays in the same loop without taking into account that they could be different lengths. Your calls to GetIntArrayRegion were also passing the complete length as the length parameter on both calls; if you really want to fetch each element in a loop like that, you need to pass the loop counter as the start index and a length of 1 so that only one element is returned. This is most likely what was causing the exception.
If you want to report the contents, create a procedure to do it rather than putting a loop inside your current procedure; otherwise you would have to copy and paste the loop, which is, frankly, bad coding practice, and we don't want that now, do we?
Sarcasm on
Not that expecting someone who has tried to help you to correct your code, rather than actually understanding the problem yourself, is any better, but ho hum.
Sarcasm off
