TDataset --> Pointer to a rows × cols matrix?

I am writing a windows application in Lazarus/FreePascal (like Delphi). I have a TDataset object that is populated by 5000 rows, 2 columns of numerical values.
I need to pass this data to a C function that I am importing statically from a .dll library.
Here is an excerpt from the library's manual that explains what format its parameters should be in:
flann_index_t flann_build_index(float* dataset,
                                int rows,
                                int cols,
                                float* speedup,
                                struct FLANNParameters* flann_params);
This function builds an index and returns a reference to it. The
arguments expected by this function are as follows: dataset, rows and
cols are used to specify the input dataset of points:
dataset is a pointer to a rows × cols matrix stored in row-major
order (one feature on each row)
Can I simply pass the TDataSet object? Do I have to do something to it first so that the pointer is in the right form?

Obviously you can't pass the TDataSet object. It is a FreePascal object, and the function seems to expect a pointer to a float (which is probably a pointer to a Single in FreePascal). It probably expects a two-dimensional array of floats. You also have to pass another pointer to float and a pointer to a FLANNParameters structure.
Move() won't work either. A TDataSet is not an array.
I guess you'll have to declare an array like Uwe said, populate it using your dataset and pass the array:
type
  PMyFloatArray = ^TMyFloatArray;
  TMyFloatArray = array[0..4999, 0..1] of Single;
var
  MyArray: PMyFloatArray;
  idx: flann_index_t;
begin
  New(MyArray);
  try
    // Fill array using your TDataSet...
    // set up other parameters...
    idx := flann_build_index(MyArray, 5000, 2, @speedup, etc...);
    // ...
  finally
    Dispose(MyArray);
  end;
end;
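For completeness, here is a rough sketch of how the C prototype might be translated to a FreePascal import. The DLL file name, the cdecl calling convention, the opaque flann_index_t mapping and the empty TFLANNParameters record are assumptions for illustration only; check flann.h and the manual for the real declarations:

const
  FLANN_DLL = 'flann.dll'; // assumption: use the actual library file name

type
  PSingle = ^Single;                    // already provided by recent RTLs
  flann_index_t = Pointer;              // assumption: FLANN returns an opaque handle
  PFLANNParameters = ^TFLANNParameters;
  TFLANNParameters = record
    // fields must match the C struct FLANNParameters exactly (see flann.h)
  end;

// float* maps to PSingle; the calling convention is assumed to be cdecl
function flann_build_index(dataset: PSingle; rows, cols: Integer;
  speedup: PSingle; flann_params: PFLANNParameters): flann_index_t;
  cdecl; external FLANN_DLL;

With an import like this, the array from the snippet above can be passed as PSingle(MyArray) (or @MyArray^[0, 0]), together with @speedup and a pointer to the parameters record.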
Shameless plug
Please read my Pitfalls of Conversion article about converting function declarations from C to Delphi (and probably FreePascal on Win32). While I'm at it, you may want to read my article Addressing Pointers too.

No, you can't pass the dataset directly. The naming "dataset" might imply that, but the meaning is totally different. You have to pass a pointer to a float matrix to the function. To implement this you should declare an array[0..4999, 0..1] of float (probably double) and fill it from the dataset.

Using Rudy's solution as a base (Thanks by the way!) I came up with this:
with Datasource1.DataSet do
begin
  Open;
  First;
  field_count := FieldCount;
  record_count := RecordCount;
  row := 0;
  while not EOF do
  begin
    for col := 0 to field_count - 1 do
      MyArray[row, col] := Fields.Fields[col].AsFloat;
    row := row + 1; // Shift to next row in array
    Next;           // Shift to next row in dataset
  end;
end;
Seems to work great, and a lot faster than I expected.

Related

Function to average elements of any array in Delphi

How can we implement a function to average all elements of a dynamic multidimensional array in Delphi 6? Such as:
Function Average(arr:any_array):extended;
Where arr is a dynamic array and may have 1 or more dimensions.
Linked question: how can we linearize a multidimensional array? Do you have any example?
I ask this because I have many different arrays which I must average, if possible, with the one same function.
If you have a one-dimensional array you can simply use the Mean function located in the Math unit.
But for a multidimensional array I'm afraid there is no ready-made function. The main reason is that Delphi treats multidimensional arrays in a rather specific way.
In many other programming languages a multidimensional array is in fact just a one-dimensional array with a modified mechanism for accessing array elements, which works something like this:
Actual Array Item Index = First parameter + (Second parameter * Size of first dimension)
In Delphi, multidimensional dynamic arrays are represented as multiple one-dimensional arrays referenced by another array, like so:
Array of One Dimensional Array
EDIT 2: As mentioned by David in his comment, these types of arrays are also known as Jagged arrays or Ragged arrays.
Why is this so different? Why is it so important? Because the one-dimensional arrays that form a multidimensional array in Delphi don't need to be of the same size. So the data in your multidimensional array can actually look like this (where each X represents one array item):
XXXX
XXXXXXX
XXXX
XXXXX
So I'm afraid you will have to write your own function for this. But that should not be so hard.
All you need is a mechanism to iterate through every item in your arrays (I bet you have this already implemented in your code somewhere) so you can sum up the values of all elements. Then you divide that sum by the number of elements to get the arithmetic average of all items.
EDIT 3: I'm not sure you could have a single piece of code to iterate through all of your dynamic arrays. At the very least you need to know how many dimensions your array has, because iterating through multiple dimensions means recursively iterating through each separate array that represents a separate dimension.
The code for summing up elements in a 3D dynamic array would look like this:
type
  TDyn3DArray = array of array of array of Single;

implementation

function SumOf3DArray(const Dyn3DArray: TDyn3DArray): Single;
var
  X, Y, Z: Integer;
begin
  Result := 0;
  for X := Low(Dyn3DArray) to High(Dyn3DArray) do
    for Y := Low(Dyn3DArray[X]) to High(Dyn3DArray[X]) do
      for Z := Low(Dyn3DArray[X, Y]) to High(Dyn3DArray[X, Y]) do
        Result := Result + Dyn3DArray[X, Y, Z];
end;
I believe you will be able to modify the code for a different number of dimensions yourself.
EDIT 1: As for linearizing multidimensional arrays, by which I believe you mean turning a multidimensional array into a one-dimensional one: you would just copy the elements of the separate dimensions and append them to one one-dimensional array.
But doing that just to get the average value of all elements would not be practical. First, you would need double the amount of memory; second, copying all the elements from multiple dimensions into one means iterating through every element anyway, and whatever method you then use for calculating the average would have to iterate through all those copied elements again, slowing the whole process down for no good reason.
And if you are interested in replacing multidimensional arrays with one-dimensional arrays while still treating their elements as if they were in a multidimensional array, check my explanation above of how some other programming languages treat multidimensional arrays.
NOTE that this requires accessing all items of such arrays through special methods and will thus result in large changes to the code where you interact with these arrays, unless you wrap such an array in a class with a default multi-indexed property that lets you access the array items; a rough sketch of that follows below. Such an approach requires additional code for initialization and finalization of these classes and a slightly modified approach when resizing the arrays.
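For illustration, a minimal sketch of such a wrapper, assuming two dimensions and Double elements (the class name and members are made up for this example; it stores the data in one flat dynamic array and maps the two indexes onto it with the formula shown earlier):

type
  TFlat2DArray = class
  private
    FRows, FCols: Integer;
    FData: array of Double;  // one flat block holding Rows * Cols elements
    function GetItem(Row, Col: Integer): Double;
    procedure SetItem(Row, Col: Integer; const Value: Double);
  public
    constructor Create(ARows, ACols: Integer);
    function Average: Extended;
    // default property: allows A[Row, Col] syntax on the wrapper object
    property Items[Row, Col: Integer]: Double read GetItem write SetItem; default;
  end;

constructor TFlat2DArray.Create(ARows, ACols: Integer);
begin
  inherited Create;
  FRows := ARows;
  FCols := ACols;
  SetLength(FData, FRows * FCols);
end;

function TFlat2DArray.GetItem(Row, Col: Integer): Double;
begin
  // same index arithmetic as described above
  Result := FData[Row * FCols + Col];
end;

procedure TFlat2DArray.SetItem(Row, Col: Integer; const Value: Double);
begin
  FData[Row * FCols + Col] := Value;
end;

function TFlat2DArray.Average: Extended;
var
  i: Integer;
begin
  Result := 0.0;
  for i := 0 to High(FData) do
    Result := Result + FData[i];
  if Length(FData) > 0 then
    Result := Result / Length(FData);
end;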
Delphi does not provide easy ways for you to inspect the dimensions of an arbitrary dynamic array. Indeed it provides no means for you to pass around an arbitrary dynamic array. However, in practice, you will encounter one and two dimensional arrays commonly, and seldom anything of higher dimensions. So you need to write only a handful of functions:
type
  TDoubleArray1 = array of Double;
  TDoubleArray2 = array of TDoubleArray1;
  TDoubleArray3 = array of TDoubleArray2;

function Mean(const arr: TDoubleArray1): Double; overload;
function Mean(const arr: TDoubleArray2): Double; overload;
function Mean(const arr: TDoubleArray3): Double; overload;
Implement like this:
function Mean(const arr: TDoubleArray1): Double;
var
  i: Integer;
begin
  Result := 0.0;
  for i := low(arr) to high(arr) do
    Result := Result + arr[i];
  Result := Result / Length(arr);
end;

function Mean(const arr: TDoubleArray2): Double;
var
  i, j, N: Integer;
begin
  Result := 0.0;
  N := 0;
  for i := low(arr) to high(arr) do
    for j := low(arr[i]) to high(arr[i]) do
    begin
      Result := Result + arr[i, j];
      inc(N);
    end;
  Result := Result / N;
end;

function Mean(const arr: TDoubleArray3): Double;
var
  i, j, k, N: Integer;
begin
  Result := 0.0;
  N := 0;
  for i := low(arr) to high(arr) do
    for j := low(arr[i]) to high(arr[i]) do
      for k := low(arr[i, j]) to high(arr[i, j]) do
      begin
        Result := Result + arr[i, j, k];
        inc(N);
      end;
  Result := Result / N;
end;
It would astound me if you really needed higher dimensions than this, but it is easy to add.
Some points:
I'm using Double rather than Extended for reasons of performance, as the latter has a size of 10 which leads to a lot of mis-aligned memory access. If you cannot bring yourself to avoid Extended you can readily adapt the code above.
You could probably find low-level methods using RTTI to write a single function that iterates over any dynamic array, however, the RTTI would likely impact performance. I personally don't see any point in going that route. Use a handful of functions like this and be done.

Returning Excel Array from Delphi

I'm trying to call a Delphi DLL from Excel and return a column of variant data values. I have the dll returning a single shortstring, and that appears in a cell ok. Now I am trying to return a column of variable values. It gets into my code ok, but the array in excel is all 0.
Any ideas greatly appreciated.
Here is the macro registration in Excel: =REGISTER("c:\projects\test\delphixl.dll","GetPolicyData","KDD","GetPolicyData","Co,Pol",1,"Delphi")
I'm not sure what the 1 is after the parameters; I can't find full documentation anywhere.
the range in Excel has: {=GetPolicyData(C1,D1)}
Below is the code in D7. According to the excel doc for the register functions, this appears to be OK.
K Data Type
The K data type uses a pointer to a variable-size FP structure. You must define this structure in the DLL or code resource as follows:
typedef struct _FP
{
    unsigned short int rows;
    unsigned short int columns;
    double array[1];   /* Actually, array[rows][columns] */
} FP;
Type
  pPolicyData = ^tPolicyData;
  tPolicyData = Record
    Rows: word;
    Cols: word;
    data: variant;
  End;

var
  pd: tPolicyData;

Function GetPolicyData(co: pShortString; pol: pShortString): pPolicyData; Stdcall;
Var
  polc: tpolcmst;
Begin
  lpro := tlifepro.create;
  lpro.opendatabases;
  polc := tpolcmst.create(lpro);
  Try
    polc.read(co^, pol^);
    pd.Rows := 2;
    pd.Cols := 1;
    pd.data := VarArrayCreate([0, 1], varVariant);
    pd.data[0] := datetostr(polc.issuedate);
    pd.data[1] := format('%.2f', [polc.modepremium]);
    result := addr(pd);
  Finally
    lpro.closedatabases;
    freeit(lpro);
  End;
End;
You have declared the data member of the FP record (TPolicyData) as a Variant. It should be declared Double, and it should be an array of the required dimensions.
In this case, with Rows = 2 and Cols = 1 :
TPolicyData = record
  Rows: word;
  Cols: word;
  data: array[0..1] of Double;
end;
If the Rows and Cols are dynamic then you will need to calculate the memory required for the FP record and allocate this dynamically. The required memory will be:
2 bytes for "Rows" word
+ 2 bytes for "Cols" word
+ Rows x Cols x 8 bytes for "data" array of Doubles
With Delphi 7 you will need to use pointer offsets into the data memory to assign the values to the individual Double elements in the "array".
However, since you have not mentioned the need to address variable rows and columns, I shall not go into further detail on that as it may be superfluous.
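Still, for reference only, here is a rough sketch of what the dynamic case could look like in Delphi 7, assuming the layout described above (2 + 2 + Rows x Cols x 8 bytes, with the Doubles starting right after the two Words). The names are illustrative, and in a real add-in the allocated block must stay alive for as long as Excel needs it:

type
  PDouble = ^Double;        // already provided by recent RTLs
  PFPData = ^TFPData;
  TFPData = packed record
    Rows: Word;
    Cols: Word;
    // the Rows * Cols Doubles follow immediately after the two Words
  end;

function AllocFPData(Rows, Cols: Word): PFPData;
var
  p: PDouble;
  i: Integer;
begin
  GetMem(Result, 2 * SizeOf(Word) + Rows * Cols * SizeOf(Double));
  Result^.Rows := Rows;
  Result^.Cols := Cols;
  // first Double sits right after the two Words (offset 4 in this layout)
  p := PDouble(PAnsiChar(Result) + 2 * SizeOf(Word));
  for i := 0 to Rows * Cols - 1 do
  begin
    p^ := 0.0; // fill in the real values here
    Inc(p);    // typed pointer: advances by SizeOf(Double)
  end;
end;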

Why is PostgreSQL array access so much faster in C than in PL/pgSQL?

I have a table schema which includes an int array column, and a custom aggregate function which sums the array contents. In other words, given the following:
CREATE TABLE foo (stuff INT[]);
INSERT INTO foo VALUES ('{1, 2, 3}');
INSERT INTO foo VALUES ('{4, 5, 6}');
I need a "sum" function that would return { 5, 7, 9 }. The PL/pgSQL version, which works correctly, is as follows:
CREATE OR REPLACE FUNCTION array_add(array1 int[], array2 int[]) RETURNS int[] AS $$
DECLARE
  result int[] := ARRAY[]::integer[];
  l int;
BEGIN
  ---
  --- First check if either input is NULL, and return the other if it is
  ---
  IF array1 IS NULL OR array1 = '{}' THEN
    RETURN array2;
  ELSEIF array2 IS NULL OR array2 = '{}' THEN
    RETURN array1;
  END IF;

  l := array_upper(array2, 1);

  SELECT array_agg(array1[i] + array2[i]) FROM generate_series(1, l) i INTO result;

  RETURN result;
END;
$$ LANGUAGE plpgsql;
Coupled with:
CREATE AGGREGATE sum (int[])
(
sfunc = array_add,
stype = int[]
);
With a data set of about 150,000 rows, SELECT SUM(stuff) takes over 15 seconds to complete.
I then re-wrote this function in C, as follows:
#include <postgres.h>
#include <fmgr.h>
#include <utils/array.h>
Datum array_add(PG_FUNCTION_ARGS);
PG_FUNCTION_INFO_V1(array_add);
/**
* Returns the sum of two int arrays.
*/
Datum
array_add(PG_FUNCTION_ARGS)
{
// The formal PostgreSQL array objects:
ArrayType *array1, *array2;
// The array element types (should always be INT4OID):
Oid arrayElementType1, arrayElementType2;
// The array element type widths (should always be 4):
int16 arrayElementTypeWidth1, arrayElementTypeWidth2;
// The array element type "is passed by value" flags (not used, should always be true):
bool arrayElementTypeByValue1, arrayElementTypeByValue2;
// The array element type alignment codes (not used):
char arrayElementTypeAlignmentCode1, arrayElementTypeAlignmentCode2;
// The array contents, as PostgreSQL "datum" objects:
Datum *arrayContent1, *arrayContent2;
// List of "is null" flags for the array contents:
bool *arrayNullFlags1, *arrayNullFlags2;
// The size of each array:
int arrayLength1, arrayLength2;
Datum* sumContent;
int i;
ArrayType* resultArray;
// Extract the PostgreSQL arrays from the parameters passed to this function call.
array1 = PG_GETARG_ARRAYTYPE_P(0);
array2 = PG_GETARG_ARRAYTYPE_P(1);
// Determine the array element types.
arrayElementType1 = ARR_ELEMTYPE(array1);
get_typlenbyvalalign(arrayElementType1, &arrayElementTypeWidth1, &arrayElementTypeByValue1, &arrayElementTypeAlignmentCode1);
arrayElementType2 = ARR_ELEMTYPE(array2);
get_typlenbyvalalign(arrayElementType2, &arrayElementTypeWidth2, &arrayElementTypeByValue2, &arrayElementTypeAlignmentCode2);
// Extract the array contents (as Datum objects).
deconstruct_array(array1, arrayElementType1, arrayElementTypeWidth1, arrayElementTypeByValue1, arrayElementTypeAlignmentCode1,
&arrayContent1, &arrayNullFlags1, &arrayLength1);
deconstruct_array(array2, arrayElementType2, arrayElementTypeWidth2, arrayElementTypeByValue2, arrayElementTypeAlignmentCode2,
&arrayContent2, &arrayNullFlags2, &arrayLength2);
// Create a new array of sum results (as Datum objects).
sumContent = palloc(sizeof(Datum) * arrayLength1);
// Generate the sums.
for (i = 0; i < arrayLength1; i++)
{
sumContent[i] = arrayContent1[i] + arrayContent2[i];
}
// Wrap the sums in a new PostgreSQL array object.
resultArray = construct_array(sumContent, arrayLength1, arrayElementType1, arrayElementTypeWidth1, arrayElementTypeByValue1, arrayElementTypeAlignmentCode1);
// Return the final PostgreSQL array object.
PG_RETURN_ARRAYTYPE_P(resultArray);
}
This version takes only 800 ms to complete, which is.... much better.
(Converted to a stand-alone extension here: https://github.com/ringerc/scrapcode/tree/master/postgresql/array_sum)
My question is, why is the C version so much faster? I expected an improvement, but 20x seems a bit much. What's going on? Is there something inherently slow about accessing arrays in PL/pgSQL?
I'm running PostgreSQL 9.0.2, on Fedora Core 8 64-bit. The machine is a High-Memory Quadruple Extra-Large EC2 instance.
Why?
why is the C version so much faster?
A PostgreSQL array is itself a pretty inefficient data structure. It can contain any data type and it's capable of being multi-dimensional, so lots of optimisations are just not possible. However, as you've seen, it's possible to work with the same array much faster in C.
That's because array access in C can avoid a lot of the repeated work involved in PL/PgSQL array access. Just take a look at array_ref in src/backend/utils/adt/arrayfuncs.c. Now look at how it's invoked from ExecEvalArrayRef in src/backend/executor/execQual.c, which runs for each individual array access from PL/PgSQL, as you can see by attaching gdb to the pid found from select pg_backend_pid(), setting a breakpoint at ExecEvalArrayRef, continuing, and running your function.
More importantly, in PL/PgSQL every statement you execute is run through the query executor machinery. This makes small, cheap statements fairly slow even allowing for the fact that they're pre-prepared. Something like:
a := b + c
is actually executed by PL/PgSQL more like:
SELECT b + c INTO a;
You can observe this if you turn debug levels high enough, attach a debugger and break at a suitable point, or use the auto_explain module with nested statement analysis. To give you an idea of how much overhead this imposes when you're running lots of tiny simple statements (like array accesses), take a look at this example backtrace and my notes on it.
There is also a significant start-up overhead to each PL/PgSQL function invocation. It isn't huge, but it's enough to add up when it's being used as an aggregate.
A faster approach in C
In your case I would probably do it in C, as you have done, but I'd avoid copying the array when called as an aggregate. You can check for whether it's being invoked in aggregate context:
if (AggCheckCallContext(fcinfo, NULL))
and if so, use the original value as a mutable placeholder, modifying it then returning it instead of allocating a new one. I'll write a demo to verify that this is possible with arrays shortly... (update) or not-so-shortly, I forgot how absolutely horrible working with PostgreSQL arrays in C is. Here we go:
// append to contrib/intarray/_int_op.c
PG_FUNCTION_INFO_V1(add_intarray_cols);
Datum add_intarray_cols(PG_FUNCTION_ARGS);
Datum
add_intarray_cols(PG_FUNCTION_ARGS)
{
ArrayType *a,
*b;
int i, n;
int *da,
*db;
if (PG_ARGISNULL(1))
ereport(ERROR, (errmsg("Second operand must be non-null")));
b = PG_GETARG_ARRAYTYPE_P(1);
CHECKARRVALID(b);
if (AggCheckCallContext(fcinfo, NULL))
{
// Called in aggregate context...
if (PG_ARGISNULL(0))
// ... for the first time in a run, so the state in the 1st
// argument is null. Create a state-holder array by copying the
// second input array and return it.
PG_RETURN_POINTER(copy_intArrayType(b));
else
// ... for a later invocation in the same run, so we'll modify
// the state array directly.
a = PG_GETARG_ARRAYTYPE_P(0);
}
else
{
// Not in aggregate context
if (PG_ARGISNULL(0))
ereport(ERROR, (errmsg("First operand must be non-null")));
// Copy 'a' for our result. We'll then add 'b' to it.
a = PG_GETARG_ARRAYTYPE_P_COPY(0);
CHECKARRVALID(a);
}
// This requirement could probably be lifted pretty easily:
if (ARR_NDIM(a) != 1 || ARR_NDIM(b) != 1)
ereport(ERROR, (errmsg("One-dimesional arrays are required")));
// ... as could this by assuming the un-even ends are zero, but it'd be a
// little ickier.
n = (ARR_DIMS(a))[0];
if (n != (ARR_DIMS(b))[0])
ereport(ERROR, (errmsg("Arrays are of different lengths")));
da = ARRPTR(a);
db = ARRPTR(b);
for (i = 0; i < n; i++)
{
// Fails to check for integer overflow. You should add that.
*da = *da + *db;
da++;
db++;
}
PG_RETURN_POINTER(a);
}
and append this to contrib/intarray/intarray--1.0.sql:
CREATE FUNCTION add_intarray_cols(_int4, _int4) RETURNS _int4
AS 'MODULE_PATHNAME'
LANGUAGE C IMMUTABLE;
CREATE AGGREGATE sum_intarray_cols(_int4) (sfunc = add_intarray_cols, stype=_int4);
(more correctly you'd create intarray--1.1.sql and intarray--1.0--1.1.sql and update intarray.control. This is just a quick hack.)
Use:
make USE_PGXS=1
make USE_PGXS=1 install
to compile and install.
Now DROP EXTENSION intarray; (if you already have it) and CREATE EXTENSION intarray;.
You'll now have the aggregate function sum_intarray_cols available to you (like your sum(int4[])), as well as the two-operand add_intarray_cols (like your array_add).
By specializing in integer arrays a whole bunch of complexity goes away. A bunch of copying is avoided in the aggregate case, since we can safely modify the "state" array (the first argument) in-place. To keep things consistent, in the case of non-aggregate invocation we get a copy of the first argument so we can still work with it in-place and return it.
This approach could be generalised to support any data type by using the fmgr cache to look up the add function for the type(s) of interest, etc. I'm not particularly interested in doing that, so if you need it (say, to sum columns of NUMERIC arrays) then ... have fun.
Similarly, if you need to handle dissimilar array lengths, you can probably work out what to do from the above.
PL/pgSQL excels as server-side glue for SQL elements. Procedural elements and lots of assignments are not among its strengths. Assignments, tests or looping are comparatively expensive and only warranted if they help to take shortcuts one could not achieve with just SQL. The same logic implemented in C will always be faster, but you seem to be well aware of that ...
Typically, a pure SQL solution is faster. Compare this simple, equivalent solution in your test setup:
SELECT array_agg(a + b)
FROM (
SELECT unnest('{1, 2, 3 }'::int[]) AS a
, unnest('{4, 5, 6 }'::int[]) AS b
) x;
You can wrap it into a simple SQL function. Or, for best performance, integrate it into your big query directly:
SELECT tbl_id, array_agg(a + b)
FROM (
SELECT tbl_id
, unnest(array1) AS a
, unnest(array2) AS b
FROM tbl
ORDER BY tbl_id
) x
GROUP BY tbl_id;
Set-returning functions only work in parallel in a SELECT if the number of returned rows is identical. I.e.: works only for arrays of equal length. The behavior has eventually been sanitized in Postgres 10. See:
What is the expected behaviour for multiple set-returning functions in SELECT clause?
It's generally best to test with a current version of Postgres. At least update to the latest point release (9.0.15 at the time of writing). Might be part of the explanation for the big difference in performance.
Postgres 9.4
Performance improvements for array handling.
There is a cleaner solution for unnesting in parallel now:
Unnest multiple arrays in parallel

How to pass array of shortstring to a method

I would like to make a procedure that takes an array of shortstring as an argument:
procedure f(const a, b: Array of shortstring);
I would like to call this with arrays of known length and shortstrings of known length e.g.
var
A, B: array[1..2] of string[5];
C, D: array[1..40] of string[12];
begin
f(A,B);
f(C,D);
end;
This results in compiler error E2008: Incompatible types.
Why is that? Can I write a procedure that can take arrays of shortstring (any length of arrays/strings)?
Why use shortstring?
The shortstrings are fields in an existing record. There are a lot of these records, with thousands of shortstrings. In an effort to migrate data from TurboPower B-Tree Filer to SQL databases, one step is to convert the record to a dataset, and then back to a record, to confirm all fields are converted correctly in both directions. I have been using CompareMem on the records to check this, but it does not provide enough information as to which field a conversion error is in. Thus a small program was created, which from the record definition can generate code to compare the two records. It was for this code generator that I needed a function to compare shortstrings. It ended up using CompareMem on the shortstrings.
A ShortString is 0 to 255 characters long. The length of a ShortString can change dynamically, but its memory is statically allocated: 256 bytes, where the first byte stores the length of the string and the remaining 255 bytes are available for characters. A string[5] declared this way, on the other hand, allocates only as much memory as the type requires (5 bytes for characters plus 1 byte for the length).
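A quick way to see the difference is to compare the sizes the compiler reports:

program ShortStringSizes;
var
  S5: string[5];
  SFull: ShortString;
begin
  WriteLn(SizeOf(S5));    // 6: one length byte + 5 characters
  WriteLn(SizeOf(SFull)); // 256: one length byte + 255 characters
end.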
You could use a named type:
type
  MyString = string[5];
...
procedure f(const a, b: array of MyString);
...
var
  A, B: array[1..2] of MyString;
begin
  f(A, B);
end;
In a similar situation I've used the following:
type
  TOpenArrayOfOpenString = record
  strict private
    FSizeOfString: Integer;
    FpStart: PChar;
    FArrayLength: Integer;
    function GetItemPtr(AIndex: Integer): PShortString;
  public
    constructor Init(var AFirstString: Openstring; AArrayLength: Integer);
    function Equals(const AArray: TOpenArrayOfOpenString): Boolean;
    property SizeOfString: Integer read FSizeOfString;
    property pStart: PChar read FpStart;
    property ArrayLength: Integer read FArrayLength;
    property ItemPtrs[AIndex: Integer]: PShortString read GetItemPtr; default;
  end;

{ TOpenArrayOfOpenString }

constructor TOpenArrayOfOpenString.Init(var AFirstString: Openstring; AArrayLength: Integer);
begin
  FSizeOfString := SizeOf(AFirstString);
  FpStart := @AFirstString[0]; // incl. length byte!
  FArrayLength := AArrayLength;
end;

function TOpenArrayOfOpenString.Equals(const AArray: TOpenArrayOfOpenString): Boolean;
begin
  Result := CompareMem(pStart, AArray.pStart, SizeOfString * ArrayLength);
end;

function TOpenArrayOfOpenString.GetItemPtr(AIndex: Integer): PShortString;
begin
  Result := PShortString(pStart + AIndex * SizeOfString);
end;
You could use it like this:
procedure f(const a: TOpenArrayOfOpenString);
var
  i: Integer;
begin
  for i := 0 to Pred(a.ArrayLength) do
    Writeln(a[i]^);
end;

procedure Test;
var
  A: array[1..2] of string[5];
  C: array[1..40] of string[12];
begin
  f(TOpenArrayOfOpenString.Init(A[1], Length(A)));
  f(TOpenArrayOfOpenString.Init(C[1], Length(C)));
end;
It's not as elegant as a solution built into the language could be and it is a bit hacky as it relies on the fact/hope/... that the strings in the array are laid out contiguously. But it worked for me for some time now.
type
  shortStrings = array[1..2] of string[5];
...
var
  a, b: shortStrings;
...
procedure rock(a, b: shortStrings);
...
You are combining two different kinds of open array.
First, there is the classic Turbo Pascal "string" (also called "openstring" in Delphi IIRC) which is essentially string[255]. As string[255] is a superset of all shortstrings, the open array aspect simply converts all shortstring types to it.
The "array of xx" syntax is the Delphi (4+?) open array. It is an open array of any type, not just strings, and the syntax to call it is f(nonarrayparam,[arrayelement0,arrayelement1]);
Somehow you seem to mix both syntaxes, and even aggravate it by adding const, which forces pass by reference and excludes conversions.
I think you assume shortstring has a performance advantage. It has, in some cases. Open array is not one of those cases. Not even in TP :-)

How do I check if an array contains particular index in Pascal?

E.g. I have this array:
type
  OptionRange = array[1..9] of integer;
How do I check if array[x] exists?
Actually, I want to limit user input with the array index. Am I doing the wrong thing? Is there a better practical solution?
In Free Pascal and the Borland dialects (and perhaps elsewhere as well), you can use the Low and High functions on the array type or a variable of the array type. I see this used most often to determine the bounds for for loops:
var
  range: OptionRange;
  i: Integer;
begin
  for i := Low(range) to High(range) do begin
    range[i] := GetOptionRangeElement(i);
  end;
end;
You can also define a subrange type and then use it to define both the array and the index variables you use on the array:
type
  OptionRangeIndex = 1..9;
  OptionRange = array[OptionRangeIndex] of Integer;
var
  range: OptionRange;
  i: OptionRangeIndex;
Then, when you have range checking enabled (assuming your compiler offers such a feature) and you use a value that's outside the range for OptionRange indices, you'll get a run-time error that you can catch and handle however you want.
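As a minimal sketch (the index value and the handling are just for illustration; SysUtils is needed so the runtime range-check error surfaces as an ERangeError exception):

program RangeCheckDemo;
{$R+} // enable range checking
uses
  SysUtils;
type
  OptionRange = array[1..9] of integer;
var
  range: OptionRange;
  idx: Integer;
begin
  idx := 10; // user input outside the valid 1..9 range
  try
    range[idx] := 0; // raises ERangeError because range checking is on
  except
    on E: ERangeError do
      WriteLn('Index ', idx, ' is outside the array bounds');
  end;
end.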
I'm not really sure what an option range is or why an array of nine integers would be used to represent one, but I figure that's a name-selection issue.
