How can I create a synonym in HyperFileSQL?
I have a table named USER and I cannot access it via ODBC. I can't rename it, so I want to create a synonym for it. How do I do this?
To create a synonym, use the HAlias() function.
// Create an alias for the ORDERS file
// (Syntax available from version 19)
Orders2013 is Data Source <description=Orders>
IF HAlias(Orders, Orders2013) = True THEN
    // ORDERS2013 can now be used in the processes
    // It behaves the same way as
    // the ORDERS file described in the analysis.
    // Modify the directory
    HChangeDir(Orders2013, "D:\SalesMgt\Archive2013")
    // Modify the name
    HChangeName(Orders2013, "Orders")
    HOpen(Orders2013)
    ...
    // Processes on the Orders2013 file
    ...
END
// Cancel the alias
HCancelAlias(Orders2013)
Related
I am trying to create an external table (xyz) in Snowflake, using a pattern to load historical files from a stage. There are multiple files, and I am using the following pattern to load the files whose names start like the one below:
201802242300_5d80272d1abcd32cc7a981da083ed498.gz (the Feb 24th 2018 file)
Create external table xyz
(
    samplecol1 varchar as (value:samplecol1::varchar),
    samplecol2 varchar as (value:samplecol2::varchar),
    date date as to_date(substr(metadata$filename, 1, 8), 'yyyymmdd')
)
partition by (date)
location = @snowflakestage.largetable
pattern = '.*/20180224.*[_].*.gz'
file_format = (type = 'JSON');
It executes successfully but does not load any data. Is my pattern right to pick up the file name listed above?
A good way to test patterns is via the LIST command, as it takes the same PATTERN option.
Thus, for you:
LIST @snowflakestage.largetable pattern='.*/20180224.*[_].*.gz'
For example, using the CitiBike example data, there are not only Parquet files in the stage, so if you try to load all the files, you get errors.
create stage citibike.public.citibike_trips
url = 's3://snowflake-workshop-lab/citibike-trips';
list @citibike_trips;
name
s3://snowflake-workshop-lab/citibike-trips-parquet/2022/01/08/data_01a19496-0601-8b21-003d-9b03003c624a_3106_4_0.snappy.parquet
s3://snowflake-workshop-lab/citibike-trips-parquet/2022/01/09/data_01a19496-0601-8b21-003d-9b03003c624a_1906_6_0.snappy.parquet
s3://snowflake-workshop-lab/citibike-trips-parquet/2022/01/10/data_01a19496-0601-8b21-003d-9b03003c624a_2206_6_0.snappy.parquet
s3://snowflake-workshop-lab/citibike-trips/json/2013-06-01/data_01a304b5-0601-4bbe-0045-e8030021523e_005_7_0.json.gz
s3://snowflake-workshop-lab/citibike-trips/json/2013-06-01/data_01a304b5-0601-4bbe-0045-e8030021523e_005_7_1.json.gz
s3://snowflake-workshop-lab/citibike-trips/json/2013-06-01/data_01a304b5-0601-4bbe-0045-e8030021523e_005_7_2.json.gz
So I played around until I found a pattern that worked for the files I wanted.
list @citibike_trips pattern = '.*trips_.*csv.gz';
We're building dynamic data loading statements for Snowflake using the Python interface.
We want to create a stage at query runtime and use that stage in a subsequent statement. Table and stage names are dynamic, using bind variables.
Yet we don't seem to be able to find the correct syntax, even though we tried everything on https://docs.snowflake.com/en/user-guide/python-connector-api.html
COPY INTO IDENTIFIER( %(table_name)s )(SRC, LOAD_TIME, ROW_HASH)
FROM (SELECT t.$1, CURRENT_TIMESTAMP(0), MD5(t.$1) FROM "'%(stage_name)s'" t)
PURGE = TRUE;
Is this even possible? Does it work for anyone?
Your code does not create a stage as you mentioned, and you don't need to create one; instead, use a table stage or a user stage. The SQL below uses a table stage.
You also need to change your syntax a little and use a more Pythonic way: f-strings.
sql = f"""COPY INTO {table_name} (SRC, LOAD_TIME, ROW_HASH)
FROM (SELECT t.$1, CURRENT_TIMESTAMP(0), MD5(t.$1) FROM @%{table_name} t)
PURGE = TRUE"""
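For completeness, a minimal sketch of running the dynamically built statement through the Python connector could look like the following. The connection parameters and the table name are placeholders, and the data file is assumed to have already been uploaded to the table stage @%<table_name> with PUT:

import snowflake.connector

# Placeholder connection parameters -- substitute your own account details.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_warehouse",
    database="my_database",
    schema="my_schema",
)

table_name = "MY_TABLE"  # dynamic identifier, built in Python rather than bound

sql = f"""COPY INTO {table_name} (SRC, LOAD_TIME, ROW_HASH)
FROM (SELECT t.$1, CURRENT_TIMESTAMP(0), MD5(t.$1) FROM @%{table_name} t)
PURGE = TRUE"""

try:
    cur = conn.cursor()
    cur.execute(sql)
    # COPY INTO returns one result row per file it loaded
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()

Since the statement text is assembled with an f-string rather than bound, make sure table_name comes from a trusted source; it is not escaped the way bind-variable values are.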
I could use some help with an AnyLogic model.
Model (in short): a manufacturing scenario where orders move along individual routes. The workplaces (WP) are created dynamically at simulation start; their names, quantity, and other parameters are stored in a database (Excel import). The orders are also created according to an import. The agent population "order" has a collection routing which contains the workplaces it has to stop at, in that specific order.
Target: I want a moveTo block in Main which finds the next destination of the agent order.
Problem and solution paths:
I set the destination type to agent, and in the Agent field I typed a function, agent.getDestination(). This function is in order and returns the next entry of the collection: WP destinationName = routing.get(i). With this I get a datatype error (at runtime, not while compiling). I guess it's because the database does not save the entries as the WP type but only as String.
Is there a possibility to create a collection with agents from an Excel file?
After this I tried to use the same getDestination as a String, and then find, via findFirst, the WP matching the returned name and return it as a WP: WP targetWP = findFirst(wps, w -> w.name == destinationName);
Of course wps (the population of workplaces) couldn't be found.
How can I search the population?
Maybe with an Agentlink?
I think it is not that difficult, but I can't find an answer or a solution. As you can tell, I'm a beginner... I hope the description is good and someone can help me or give me a hint :)
Thanks
Is there a possibility to create a collection with agents from an Excel file?
Not directly using the collection's properties and, as you've seen, you can't have database (DB) column types which are agent types.1
But this is relatively simple to do directly via Java code (and you can use the Insert Database Query wizard to construct the skeleton code for you).
After this I tried to use the same getDestination as a String, and then find, via findFirst, the WP matching the returned name and return it as a WP
Yes, this is one approach. If your order details are in Excel/the database, they are presumably referring to workplaces via some String ID (which will be a parameter of the workplace agents you've created from a separate Excel worksheet/database table). You need to use the Java equals method to compare strings though, not == (which is for comparing numbers or whether two objects are the same object).
I want a moveTo block in main which finds the next destination of the agent order
So the general overall solution is
Create a population of Workplace agents (let's say called workplaces in Main) from the DB, each with a String parameter id or similar mapped from a DB column.
Create a population of Order agents (let's say called orders in Main) from the DB and then, in their on-startup action, set up their collection of workplace IDs (type ArrayList, element class String; let's say called workplaceIDsList) using data from another DB table.
Order probably also needs a working variable storing the next index in the list that it needs to go to (so let's say an int variable nextWorkplaceIndex which starts at 0).
Write a function in Main called getWorkplaceByID that has a single String argument id and returns a Workplace. This gets the workplace from the population that matches the ID; a one-line way similar to yours is findFirst(workplaces, w -> w.id.equals(id)).
The MoveTo block (which I presume is in Main) needs to move the Order to an agent defined by getWorkplaceByID(agent.workplaceIDsList.get(agent.nextWorkplaceIndex++)). (The ++ bit increments the index after evaluating the expression so it is ready for the next workplace to go to.)
For populating the collection, you'd have two tables, something like the below (assuming using strings as IDs for workplaces and orders):
orders table: columns for parameters of your orders (including some String id column) other than the workplace-list. (Create one Order agent per row.)
order_workplaces table: columns order_id, sequence_num and workplace_id (so with multiple rows specifying the sequence of workplace IDs for an order ID).
In the On startup action of Order, set up the skeleton query code via the Insert Database Query wizard (where we want to loop through all rows for this order's ID and do something --- we'll change the skeleton code to add entries to the collection instead of just printing stuff via traceln like the skeleton code does).
Then we edit the skeleton code to look like the below. (Note we add an orderBy clause to the initial query so we ensure we get the rows in ascending sequence number order.)
List<Tuple> rows = selectFrom(order_workplaces)
    .where(order_workplaces.order_id.eq(id))
    .orderBy(order_workplaces.sequence_num.asc())
    .list();

for (Tuple row : rows) {
    workplaceIDsList.add(row.get(order_workplaces.workplace_id));
}
1 The AnyLogic database is a normal relational database --- HSQLDB in fact --- and databases only understand their own specific data types like VARCHAR, with AnyLogic and the libraries it uses translating these to Java types like String. In the user interface, AnyLogic makes it look like you set the column types as int, String, etc. but these are really the Java types that the columns' contents will ultimately be translated into.
AnyLogic does support columns which have option list types (and the special Code type column for columns containing executable Java code), but these are special cases using special logic under the covers to translate the column data (which is ultimately still a string of characters) into the appropriate option list instance or, for Code columns, into compiled-on-the-fly-and-then-executed Java.
Welcome to Stack Overflow :) To create a population via Excel import, you have to create a method and call code like this. You also need an empty population.
int n = excelFile.getLastRowNum(YOUR_SHEET_NAME);
for (int i = FIRST_ROW; i <= n; i++) {
    String name = excelFile.getCellStringValue(YOUR_SHEET_NAME, i, 1);
    double SEC_PARAMETER_TO_READ = excelFile.getCellNumericValue(YOUR_SHEET_NAME, i, 2);
    WP workplace = add_wps(name, SEC_PARAMETER_TO_READ);
}
Now, if you want to get a workplace by name, you have to create a method similar to your attempt.
Function body:
WP workplaceToFind = wps.findFirst(w -> w.name.equals(destinationName));
if (workplaceToFind != null) {
    // do whatever you want
}
feature

    open_file_sample
        local
            l_file: UNIX_FILE_INFO
            l_path: STRING
        do
            make
            l_path := "/var/log/syslog"
            l_file.update (l_path)
            if l_file.parent_directory.exists and then l_file.parent_directory.is_writtable then
                create l_file.make
            end
                -- AS the above statement doesn't exist!
            check
                syslog_file_exists_and_is_readable: l_file.exists and then l_file.is_readable
            end
        end
Is this the proper way to check for file existence in Eiffel?
I was wondering if there is a way to avoid creating 2 objects. I'll complete my check with the following steps:
define the path: l_file_path := "/some/path/with_file.log"
check if the parent directory exists and has rights to write into it
create the log file
The problem when accessing the file system is that the property of a file or directory may have changed between the time you query it and the time you want to use it (even if it's only a small fraction of a second). Because of that, assertions in Eiffel of the form:
f (a_file: RAW_FILE)
    require
        a_file.is_writable
    do
        a_file.open_write
may be violated. In the Gobo Eiffel libraries, instead of checking whether a file can be opened in write mode before actually opening it, the reverse approach was chosen: try to open the file, and check whether it was opened successfully.
f (a_pathname: STRING)
    local
        l_file: KL_TEXT_OUTPUT_FILE
    do
        create l_file.make (a_pathname)
        l_file.recursive_open_write
        if l_file.is_open_write then
            -- Write to the file.
            l_file.close
        else
            -- Report the problem.
        end
    end
Note that it uses recursive_open_write and not just open_write so that missing directories in the path get created as well.
You can use
{FILE_UTILITIES}.file_exists (the_file_name)
or
(create {RAW_FILE}.make_with_name (the_file_name)).exists
You can do something similar to this:
do
    if not l_file.exists then
        print ("error: '" + l_path + "' does not exist%N")
    else
        ...
My final solution is the following, and is open to criticism; I personally find it very complicated in comparison to lower-level languages and libraries (bash, for example).
log_file_path: detachable PATH
        -- Attached if can be created
    local
        l_file: UNIX_FILE_INFO
        l_path, l_parent_dir: PATH
        l_fu: FILE_UTILITIES
    do
        create l_fu
            -- Parent directory check
        create l_path.make_from_string ({APP_CONFIGURATION}.application_log_file_path)
        l_parent_dir := l_path.parent
        if not l_fu.directory_exists (l_parent_dir.out) then
            l_fu.create_directory_path (l_parent_dir)
        end
        create l_file.make
        l_file.update (l_parent_dir.out)
        if not l_file.exists or
            l_file.is_access_writable
        then
            io.putstring ("Error: " + log_file_path_string + " parent directory is not writtable and cannot be created")
            check
                parent_dir_exists_and_is_writtable: False
            end
        else
            Result := l_path
        end
    ensure
        file_name_could_be_created: Result /= Void
    end
I'm trying to create a UUID id in a table with PostgreSQL. I tried with:
id uuid PRIMARY KEY DEFAULT uuid_generate_v4()
But I get:
ERROR: function uuid_generate_v4() does not exist
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I tried adding the schema, like id uuid PRIMARY KEY DEFAULT public.uuid_generate_v4() (as seen in a comment here).
I also checked whether the extension is there (SELECT * FROM pg_available_extensions;), and yes, I have it installed in the PostgreSQL database.
I read that if Postgres is running in --single mode, this may not work, but I don't know how to test that or whether there is any way around it.
Does anybody know how I can resolve the problem? Or is there any other option?
Is it a good idea to use something like this:
SET DEFAULT uuid_in(md5(random()::text || now()::text)::cstring);
Because the function uuid_generate_v4 is not found, this suggests that the extension uuid-ossp is not loaded.
pg_available_extensions lists the extensions available, but not necessarily loaded.
To see the list of loaded extensions, query the view pg_extension as such:
select * from pg_extension;
To load the uuid-ossp extension, run the following:
CREATE EXTENSION "uuid-ossp";
Note: this will require superuser privileges.
After the uuid-ossp extension is successfully loaded, you should see it in the pg_extension view, and the function uuid_generate_v4 should be available.
In my case I needed to add the schema to the function call like this: app.uuid_generate_v4()
instead of this: uuid_generate_v4()
I found the schema for each extension by running this query:
SELECT
    pge.extname,
    pge.extversion,
    pn.nspname AS schema
FROM pg_extension pge
JOIN pg_catalog.pg_namespace pn ON pge.extnamespace = pn."oid";