Summary
A RESTful POST request to /request/{requestName}, for example
POST /request/CreateProduct
{
"Code": 4711,
"Name": "My product"
}
is to be validated:
whether the given ${header.requestName} has a corresponding row in the database, and
whether the provided parameters match that requestName according to another table in the database, specifically whether all required parameters are present and have the correct data type.
Current route sample config
restConfiguration().component("netty4-http").port(8080).bindingMode(RestBindingMode.json);
rest("request/{requestName}").post()
.consumes("application/json; charset=UTF-8")
.produces("application/json; charset=UTF-8")
.to("direct:newRequest");
from("direct:newRequest").transform().simple("Received request: ${header.requestName}, Body: ${in.body}");
DB tables (MariaDB) to validate against
Table: request
id | name
------------------
1 | CreateProduct
2 | UpdateProduct
3 | DeleteProduct
Table: request_parameter
id | name | type
-------------------
1 | Code | INT
2 | Name | STRING
3 | Price | INT
Table: request_to_parameter
request | parameter | required
------------------------------
1 | 1 | 1
1 | 2 | 1
1 | 3 | 0
Question
Is this possible with pure Camel, or should I implement my own helper function? How do I include my own custom function in a Camel route?
You can implement a custom processor as described here:
http://camel.apache.org/processor.html
With the Exchange object you have access to the headers and the body, so you can extract the information needed for the validation.
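A minimal sketch of such a processor, validating against the tables above via plain JDBC. It assumes a javax.sql.DataSource pointing at the MariaDB database is available (e.g. looked up from the registry) and that the JSON body arrives as a Map (for instance by configuring a Map type on the REST definition or unmarshalling it first); class and variable names are illustrative:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Map;
import javax.sql.DataSource;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;

public class RequestValidationProcessor implements Processor {

    private final DataSource dataSource;

    public RequestValidationProcessor(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void process(Exchange exchange) throws Exception {
        String requestName = exchange.getIn().getHeader("requestName", String.class);
        @SuppressWarnings("unchecked")
        Map<String, Object> body = exchange.getIn().getBody(Map.class);

        try (Connection con = dataSource.getConnection()) {
            // 1. Does the request name have a corresponding row in the request table?
            Integer requestId = findRequestId(con, requestName);
            if (requestId == null) {
                throw new IllegalArgumentException("Unknown request: " + requestName);
            }

            // 2. Are all required parameters present and of the declared type?
            String sql = "SELECT p.name, p.type, rp.required "
                       + "FROM request_to_parameter rp "
                       + "JOIN request_parameter p ON p.id = rp.parameter "
                       + "WHERE rp.request = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, requestId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        String name = rs.getString("name");
                        String type = rs.getString("type");
                        boolean required = rs.getBoolean("required");
                        Object value = (body == null) ? null : body.get(name);

                        if (value == null) {
                            if (required) {
                                throw new IllegalArgumentException("Missing required parameter: " + name);
                            }
                            continue;
                        }
                        // Simplistic type check: Jackson maps JSON numbers to Integer/Long
                        // and JSON strings to String
                        boolean typeOk = "INT".equals(type) ? value instanceof Number
                                                            : value instanceof String;
                        if (!typeOk) {
                            throw new IllegalArgumentException("Wrong type for parameter: " + name);
                        }
                    }
                }
            }
        }
    }

    private Integer findRequestId(Connection con, String requestName) throws Exception {
        try (PreparedStatement ps = con.prepareStatement("SELECT id FROM request WHERE name = ?")) {
            ps.setString(1, requestName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt("id") : null;
            }
        }
    }
}

It could then be wired into the route before the transform step, e.g. from("direct:newRequest").process(new RequestValidationProcessor(dataSource)), and validation failures surface as exceptions that you can map to an HTTP error response with onException.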
Related
I'm new to Snowflake and am facing an issue while uploading files from my local machine to a Snowflake table stage. I'm following the "Snowflake in 20 minutes" tutorial and am stuck at the file-upload step; any help or pointers would be much appreciated. Below is one of the error messages.
Error Message:
| source          | target             | source_size | target_size | source_compression | target_compression | status | message |
|-----------------|--------------------|-------------|-------------|--------------------|--------------------|--------|---------|
| employees01.csv | employees01.csv.gz | 370         | 0           | NONE               | GZIP               | ERROR  | Unknown Error in uploading a file: C:\Users\xxxx\AppData\Local\Temp\tmpj5iva8r7\employees01.csv_c.gz#y89s9rn7, file=c:\Temp\employees01.csv, real file=C:\Users\xxxx\AppData\Local\Temp\tmpj5iva8r7\employees01.csv_c.gz |
Many Thanks!
Bharat
Can you please try PUT file://c:\Temp\employees*.csv @%emp_basic; instead?
Details: https://docs.snowflake.com/en/user-guide/data-load-local-file-system-stage.html
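For reference, a minimal SnowSQL sketch of the intended upload to the table stage (table name and file path taken from the question; it assumes the current database and schema contain the emp_basic table):

-- upload the local CSV to the table stage of emp_basic
PUT file://c:\Temp\employees01.csv @%emp_basic;

-- list the files in the table stage to verify the upload
LIST @%emp_basic;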
This is just out of curiosity. On my Google dashboard there is one scorecard that doesn't get some data from BigQuery. The data comes from the CV column, and I can see 4 rows in BigQuery but only 3 on the dashboard with the same filter. I have experienced this kind of problem before. Is this a bug?
client_id | pscenario | orderid | cv_date
AAAAAAAAA | main | 11111111 | 2020-10-01
BBBBBBBBB | main | 22222222 | 2020-10-02
CCCCCCCCC | main | 33333333 | 2020-10-03
DDDDDDDDD | main | 44444444 | 2020-10-04
Dashboard filter: pscenario = main
It should be 4, but only 3 rows show up on my dashboard.
I solved this problem by reconnecting the data source.
I have a CentOS server running a local MemSQL cluster (aggregator and leaf on the same machine). I have a database named offers. For some reason, I cannot execute any queries against tables in my database.
Everything was working fine until I tried to add another machine to the cluster. I had the IT team at my place replicate the server I was working on (completely). I went over to the replicated server, deleted the database in question and then registered the server using the memsql-toolbox-config register-node command. Then the database showed it was in the transition state. I restarted MemSQL using memsql-ops and ended up in this situation.
Running a simple query yields:
memsql> select * from table;
ERROR 2261 (HY000): Query `select * from table` couldn't be executed because of an in progress failover operation. Check the status of the leaf nodes in the cluster (error 1049:'Leaf Error (172.26.32.20:3307): Unknown database 'offers_5'')
The output of the cluster status command is:
memsql> show cluster status;
+---------+--------------+------+----------+-------------+-------------+----------+--------------+-------------+-------------------------+----------------------+----------------------+---------------+-------------------------------------------------+
| Node ID | Host | Port | Database | Role | State | Position | Master Host | Master Port | Metadata Master Node ID | Metadata Master Host | Metadata Master Port | Metadata Role | Details |
+---------+--------------+------+----------+-------------+-------------+----------+--------------+-------------+-------------------------+----------------------+----------------------+---------------+-------------------------------------------------+
| 1 | 172.26.32.20 | 3306 | cluster | master | online | 0:181 | NULL | NULL | NULL | NULL | NULL | Reference | |
| 1 | 172.26.32.20 | 3306 | offers | master | online | 0:156505 | NULL | NULL | NULL | NULL | NULL | Reference | |
| 2 | 172.26.32.20 | 3307 | cluster | async slave | replicating | 0:180 | 172.26.32.20 | 3306 | 1 | 172.26.32.20 | 3306 | Reference | stage: packet wait, state: x_streaming, err: no |
| 2 | 172.26.32.20 | 3307 | offers | sync slave | replicating | 0:156505 | 172.26.32.20 | 3306 | 1 | 172.26.32.20 | 3306 | Reference | |
+---------+--------------+------+----------+-------------+-------------+----------+--------------+-------------+-------------------------+----------------------+----------------------+---------------+-------------------------------------------------+
4 rows in set (0.00 sec)
So it seems that the second node is replicating. Also note the details column saying:
stage: packet wait, state: x_streaming, err: no
Running the replication status command gives:
memsql> show replication status;
+--------+----------+------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+---------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
| Role | Database | Master_URI | Master_State | Master_CommitLSN | Master_HardenedLSN | Master_ReplayLSN | Master_TailLSN | Master_Commits | Connected | Slave_URI | Slave_State | Slave_CommitLSN | Slave_HardenedLSN | Slave_ReplayLSN | Slave_TailLSN | Slave_Commits |
+--------+----------+------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+---------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
| master | cluster | NULL | online | 0:181 | 0:181 | 0:177 | 0:181 | 86 | yes | 172.26.32.20:3307/cluster | replicating | 0:180 | 0:181 | 0:180 | 0:181 | 84 |
| master | offers | NULL | online | 0:156505 | 0:156505 | 0:156505 | 0:156505 | 183 | yes | 172.26.32.20:3307/offers | replicating | 0:156505 | 0:156505 | 0:156505 | 0:156505 | 183 |
+--------+----------+------------+--------------+------------------+--------------------+------------------+----------------+----------------+-----------+---------------------------+-------------+-----------------+-------------------+-----------------+---------------+---------------+
2 rows in set (0.00 sec)
I never initiated any failover or replication. Does anyone know why this is happening? How can I solve it?
EDIT:
Using memsql-ops I get:
[me#memsql ~]$ memsql-ops memsql-list
ID Agent Id Process State Cluster State Role Host Port Version
33829AF Af13af7 RUNNING CONNECTED MASTER 172.26.32.20 3306 6.5.18
BBA1B61 Af13af7 RUNNING CONNECTED LEAF 172.26.32.20 3307 6.5.18
But with memsql-admin, with the new memsql tools:
[me#memsql ~]$ memsql-admin list-nodes
✘ Failed to list nodes on all hosts: failed to list nodes on 1 host:
172.26.32.20
No nodes found
To make my question a bit clearer: how can I get my server to respond to queries again? And after I do, how should I go about adding another host? Should I completely clean the replicated server of any MemSQL data?
2nd EDIT:
I managed to solve this problem by deleting my database and cluster data and setting up a new cluster using the new MemSQL tools, dropping MemSQL Ops entirely. Read my answer below.
It looks like there are a couple of things that might be causing problems. Generally speaking, cloning a MemSQL server is not supported, nor is it the best way to go about adding nodes. It also looks like you may be using both the older Ops management tool and the newer MemSQL tools. I would recommend not installing or using Ops and sticking to just the new MemSQL tools instead.
A good place to start would be to recreate the nodes after cloning; a cloned MemSQL node won't correctly become part of the cluster. You should also verify that you don't have more than one master aggregator in the cluster. If you can start with that and see whether it resolves your issues, I'm happy to help with any other problems you run into.
I managed to set up a working cluster.
As micahbhakti mentioned in his answer, I tried using only the newer MemSQL tools instead of the deprecated MemSQL Ops. This required deleting the MemSQL agent on both servers and then following the tutorial in the MemSQL documentation. Here are the steps I took, for anyone struggling with this issue, which is better described as: my MemSQL-Ops-managed MemSQL cluster is not responding; how can I upgrade it to a working MemSQL-tools-managed cluster?
1. Save what data you can
The following step deletes all MemSQL data, so it would be best to save your data first. Table data can easily be exported to CSV files with a simple
SELECT * FROM important_data_containing_table INTO OUTFILE '/home/yourfolder/yourcsvfile.csv';
Do this for all tables containing important data. You should also save the schema itself. You can do that by viewing and copying into another file all the CREATE queries you originally used to create the tables, so you can re-execute them later. Use this:
SHOW CREATE TABLE your_table_name
This syntax is documented for MySQL; it might not be identical to the syntax used in MemSQL, but the base command above works. For exact information, read about MySQL Features Unsupported in MemSQL.
2. Delete anything to do with MemSQL Ops
As the documentation says here about the uninstall command:
Stops the local MemSQL Ops agent and deletes all its data.
If MemSQL nodes are already installed in the local host, this command will prompt users to delete those nodes first before proceeding with the uninstall.
And indeed, if there are nodes running (in my case there were), you will be prompted to run another command to delete those nodes: memsql-ops memsql-delete --all. This WILL delete all data in your database, as its documentation says:
Deletes all data for a MemSQL node. This operation is not reversible and may lead to data loss. Users who want to perform this operation are prompted to explicitly type ‘DELETE’ to be sure of their decision.
That's why I asked you to save whatever you need :)
This should be done for each host you want to include in your new shiny cluster.
3. Follow the instructions to create the new cluster using MemSQL tools
After you have cleaned your servers of the deprecated MemSQL Ops agent and its data, you can follow the instructions here. I chose a comprehensive multi-host setup. The process will ask you to register your hosts and then set up the node roles (master aggregator, aggregators and leaves), IP addresses, passwords, ports, etc.
After that, you can test the cluster by making changes on one machine and viewing them on another. The output of memsql-admin list-nodes on the deployment machine for my cluster was:
+------------+------------+--------------+------+---------------+--------------+---------+----------------+--------------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group |
+------------+------------+--------------+------+---------------+--------------+---------+----------------+--------------------+
| AAAAAAAAAA | Master | 172.26.32.20 | 3306 | Running | True | 6.7.16 | Online | |
| BBBBBBBBBB | Aggregator | 172.26.32.22 | 3306 | Running | True | 6.7.16 | Online | |
| CCCCCCCCCC | Leaf | 172.26.32.20 | 3307 | Running | True | 6.7.16 | Online | 1 |
| DDDDDDDDDD | Leaf | 172.26.32.22 | 3307 | Running | True | 6.7.16 | Online | 1 |
+------------+------------+--------------+------+---------------+--------------+---------+----------------+--------------------+
4. Restore the data
Re-execute all the CREATE TABLE queries you saved in step 1, and import all the data exported to CSV using this syntax:
LOAD DATA INFILE '/home/yourfolder/yourcsvfile.csv' INTO TABLE your_table;
And that's it! Now you can manage your cluster using the new MemSQL Studio, which by default runs on http://your_deployment_machine:8080.
Enjoy :)
I am familiar with using template keywords in data-driven Robot Framework testing and know that external sources of data such as text files and csv files can be used to provide test data. However, the organisation I work for wants to use data held in a database as a source for test case data. Does anybody know if this is possible? I have searched Stack Exchange, Stack Overflow and other resources but cannot find an answer or any examples.
Here is an example of the data-driven approach I am familiar with, just to give you an idea of where we are now.
*** Settings ***
Library Selenium2Library
Library AFRCLibrary
| Test Template | Suspend Region
*** Variables ***
*** Test Cases ***
| Pillar 1 BPS 2019 Suspend Region | Pillar 1 | 2019 | BPS | BPS Region 1 | Pillar 1 BPS 2019 Suspend Region Comments |
| Pillar 2 FGS 2018 Suspend Region | Pillar 2 | 2018 | FGS | FGS Region 1 | Pillar 2 FGS 2018 Suspend Region Comments |
*** Keywords ***
| Suspend Region
| | [Arguments] | ${pillar} | ${year} | ${scheme} | ${region} | ${comments} |
| | Futures Open Application | http://ags125p01:8080/operationalsite/login | ff |
| | FuturesPublicsiteWM | root | gtn | http://ags125p01:8080/operationalsite/futures/maintain_budget |
| | Select Pillar | ${pillar} | ${year} |
| | Select Scheme | ${scheme} |
| | View |
| | Suspend And Confirm | ${region} | ${comments} |
| | Futures Close Application |
| |
Unfortunately, the use of test templates more or less requires that the data is hard-coded in the test case. However, a test template is not much more than a wrapper around a for loop. You could do something like this:
| | ${database_rows}= | Run sql query
| | ... | Select * from the_database where ...
| |
| | :FOR | ${row} | IN | @{database_rows}
| | | Suspend Region | @{row}
Of course, this requires that you write the "Run sql query" keyword or an equivalent to fetch the data.
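One possible sketch of that keyword, assuming the third-party robotframework-databaselibrary and a MySQL/MariaDB driver such as pymysql are installed; the connection details are placeholders to adapt to your environment:

*** Settings ***
Library    DatabaseLibrary

*** Keywords ***
| Run sql query
| | [Arguments] | ${sql} |
| | # connection details are placeholders
| | Connect To Database | pymysql | testdata | tester | secret | dbhost | 3306 |
| | ${rows}= | Query | ${sql} |
| | Disconnect From Database |
| | [Return] | ${rows} |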
The downside of this is that all of the permutations are considered a single test case with multiple keywords, versus multiple test cases with a single keyword.
If you want to have one test case per row in a database, you could write a script that does the query, generates a test suite file using the results of the query, and then runs pybot on the generated file.
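A rough sketch of such a generator script (connection details, table, and file names are illustrative; it assumes a driver such as pymysql and that the Suspend Region keyword lives in a resource file):

# generate_and_run.py - one generated test case per database row
import subprocess
import pymysql

connection = pymysql.connect(host="dbhost", user="tester",
                             password="secret", database="testdata")
with connection.cursor() as cursor:
    cursor.execute("SELECT name, pillar, year, scheme, region, comments"
                   " FROM suspend_region_cases")
    rows = cursor.fetchall()
connection.close()

with open("generated_suite.robot", "w") as suite:
    suite.write("*** Settings ***\n")
    suite.write("Resource          suspend_region_keywords.robot\n")
    suite.write("Test Template     Suspend Region\n\n")
    suite.write("*** Test Cases ***\n")
    for name, pillar, year, scheme, region, comments in rows:
        # test case name on its own line, template arguments indented below it
        suite.write(f"{name}\n")
        args = "    ".join(str(cell) for cell in (pillar, year, scheme, region, comments))
        suite.write(f"    {args}\n")

# run the generated suite
subprocess.run(["pybot", "generated_suite.robot"], check=True)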
I am a total newbie to Access; I used Excel to handle my needs for a while.
But by now Excel has become too slow to handle such a big set of data, so I decided to migrate to Access.
Here is my problem
My columns are:
Number | Link | Name | Status
1899 | htto://example.com/code1 | code1 | Done
2 | htto://example.com/code23455 | code23455 | Done
3 | htto://example.com/code2343 | code2343 | Done
13500 | htto://example.com/code234cv | code234cv | Deleted
220 | htto://example.com/code234cv | code234cv | Null
400 | htto://example.com/code234cv | code234cv | Null
So I want a way to update the Status of my rows according to a list of numbers.
For example, I want to update the Status column for multiple numbers to become Done.
Simply put, I want to update the rows whose status is Null to become Done, according to this number list:
13544
17
13546
12
13548
13549
16000
13551
13552
13553
13554
13555
12500
13557
13558
13559
13560
30
13562
13563
Something like this
I tried an update query, but I don't know how to use criteria to solve this problem.
In Excel I did this with "conditional formatting duplicates" against the number list I wanted to update, then "sort by highlighted color", then fill-copying the status with the value.
I know that Access is different, but I hope there is a way to do this task as Excel did.
Thanks in advance
From my understanding, you can try:
Update TblA
Set TblA.Status="Done"
where Number in (13544,17,13546,....)
Or, an easier alternative is to pull the numbers from the IN clause into their own table and use it like this:
Update TblA
Set TblA.Status="Done" where Number in (select NumCol from NumTable )
Or this solution may help you: Here