UUID saves differently for the same player in a Minecraft BungeeCord network

BungeeCord worked perfectly last week, but for the last two days I've been getting errors. When I switch between two different internal servers, one of them writes the correct UUID while the other generates a random one. I checked NameMC and UUID MC to see whether the UUID is registered, but only one of them is.
The problem is that all my data goes through a MySQL database, which identifies players by their UUID and fetches their data by UUID. Since the same player has two different UUIDs on two different servers, his data isn't saved properly when he logs in to the other server.
[Screenshots: namemc-1, namemc-2, MySQL table]

I think the cause of this behavior is the online-mode config variable, which is set to false on the sub-server. The problem comes up in multiple discussions:
https://www.spigotmc.org/threads/bungeecord-uuid-issues.313639/
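If offline mode on the backend is indeed the cause, the usual fix (a sketch, assuming a standard BungeeCord + Spigot setup) is to enable profile forwarding so the backends receive the player's real premium UUID instead of generating a random offline one:

# BungeeCord config.yml -- forward the real UUID and IP to the backends
ip_forward: true

# Each backend server's spigot.yml -- accept the forwarded profile data
settings:
  bungeecord: true

The backends keep online-mode=false in server.properties but should only be reachable through the proxy. Both the proxy and the backends need a restart afterwards, and any rows already saved under a random offline UUID will have to be migrated by hand.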

Related

Bartender 2016 R9 Integration will not print when specifying "Copies" using SQL database as source

I have designed a label using Bartender 2016 R9. All the fields use Embedded Data as the type. It prints to a Zebra thermal printer just fine.
I have set up a Bartender Integration which uses this label and an ADO.NET connection to get values from an Azure SQL database. The connection works fine, as I can see the data in the Record Browser in Bartender Integration. I have used the "Specify values for named data sources" option, filled in the fields, and set the "Copies" option to the database quantity column.
The issue I have is that when I do a test print, I get the following error:
Unable to run action 'Print Document' because the copies per serial number value of '%quantity%' is invalid. Please specify a valid number and re-run the action.
If I manually enter a number into the "Copies" field and print it works just fine.
I thought it might be a data type issue, as the column in the database was set to INT, so I changed it to VARCHAR; same issue. I also tried manually changing the SQL statement to convert from INT to VARCHAR, but I get the same error.
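For illustration, that kind of conversion might look like the following T-SQL, where the table and column names are assumptions rather than the actual schema:

-- Hypothetical cast of the INT quantity column to text
SELECT LabelText, CAST(Quantity AS VARCHAR(10)) AS Quantity
FROM LabelJobs;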
The error mentions serial numbers, yet the label is not set up to use them, and the option to use serial numbers in the integration is also unchecked. The fact that it prints when I manually enter a number into the Copies field suggests serial numbers have nothing to do with it.
Any help would be appreciated.
Thanks
Chris
One thing I'd ask is: are you using the 'ZDesigner' driver (from Zebra, the printer manufacturer) or the 'Zebra' print driver from Seagull (the makers of Bartender)? I have found that the Seagull driver works better with Bartender software... go figure :)
The Seagull drivers actually have different preferences and options on the Printer Properties tabs.
You might also check whether you have any conditional print settings in the .btw file. Under the document's Page Setup, on the Templates tab, there is a 'Print when' option.

Using variable DB connections in different identical Kettle Jobs/Transformations

I've read through many related topics here but can't seem to find a solution. Here's my scenario:
I have multiple identical customer databases
I use ETL to fill special tables within these databases, to be used as a source for PowerBI reports
Instead of copying (and thus maintaining) the ETLs for each customer, I want to pass the DB connection to the Jobs/Transformations dynamically
My plan is to create a text file of DB connections for each Customer:
cust1, HOST_NAME, DATABASE_NAME, USER_NAME, PASSWORD
cust2, HOST_NAME, DATABASE_NAME, USER_NAME, PASSWORD
and so on...
The Host will stay the same always.
The jobs will be started monthly using Pentaho kitchen in a linux box.
So when I run a job for a specific customer, I want to tell the job to use that customer's DB connection (e.g. Cust2) from the connection file.
Any help is much appreciated.
Cheers & Thanks,
Heiko
Use parameters!
When you define a connection, you see a small S sign in a blue diamond on the right of the Database Name input box. It means that, instead of spelling the name of the database, you can put in a parameter.
The first time you do it, it's a bit challenging. So follow the procedure step by step, even if you are tempted to go straight to launching a ./kitchen.sh that reads a file containing one row per customer.
1. Parametrize your transformation.
Right-click anywhere, select Properties, then Parameters, and fill in the table:
Row 1: Parameter = HOST_NAME, Default value = the host name for Cust1
Row 2: Parameter = DATABASE_NAME, Default value = the database name for Cust1
Row 3: Parameter = PORT_NUMBER, Default value = the port number for Cust1
Row 4: Parameter = USER_NAME, Default value = the user name for Cust1
Row 5: Parameter = PASSWORD, Default value = the password for Cust1
Then go to the database connection definition (left panel, View tab) and in the Settings panel enter:
Host name: ${HOST_NAME} -- the variable name wrapped in "${" and "}"
Database name: ${DATABASE_NAME} -- do not type the name, press Ctrl+Space
Port number: ${PORT_NUMBER}
User name: ${USER_NAME}
Password: ${PASSWORD}
Test the connection. If it is valid, try a test run.
2. Check the parameters.
When you press the Run button, Spoon prompts with the run options (if you ticked "Don't show me this anymore" in the past, use the drop-down right next to the Run button). Change the values of the parameters to those of Cust2 and check that it runs for the other customer.
Change them in both the Value column and the Default value column. You'll understand the difference shortly; for the moment, check that it works with both.
3. Check it on the command line.
Use pan from the command line.
The syntax should look like:
./pan.sh -file=your_transfo.ktr -param=HOST_NAME:cust3_host -param=DATABASE_NAME:cust3_db....
At this point expect a bit of trial and error, because the syntax around = and : varies slightly with the OS and the PDI version. But you should get there within 4-6 tries.
4. Make a job
Due to the parallel execution paradigm of PDI, you cannot use the Set Variables step in the same transformation that uses the variables. You need to make a job with two transformations: the first reads the CSV file and defines the variables with the Set Variables step; the second is the transformation you just developed and tested.
Don't expect it to run on the first try. Some versions of PDI are buggy and require, for example, clearing the default values of the parameters in the transformation. The Write to Log step helps here, as it writes a field to the log of the calling job; of course, you will first need to put the parameters/variables into a field with the Get Variables step.
In particular, do not start with the full customer list! Set the system up with 2-3 customers first.
Then write the full list of customers into your CSV, and run.
Run a SELECT COUNT per customer on your final load, as sketched below. This is important: you will probably want to load as many customers as possible and continue the process even if one fails. That is the default behavior (if memory serves), so with a large number of customers a failure can easily go unnoticed in the log.
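A minimal sketch of that sanity check, with table and column names assumed:

-- Hypothetical post-load check: how many rows actually landed per customer
SELECT customer, COUNT(*) AS rows_loaded
FROM final_load_table
GROUP BY customer;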
5. Install the job
In principle, it is just a ./kitchen.sh.
However, if you want to automate the load, you will have a hard time checking that nothing went wrong. So open the transformation, use the 'System date (fixed)' of the Get System Info step, and write the result alongside the customer data. Alternatively, you can get this date in the main job and pass it along with the other variables.
If you have concerns about adding a new column to the database, store the list of customers loaded each day in another table, in a file, or send it to yourself by mail. In my experience, it's the only practical way to answer a user who claims their biggest customer was not loaded three weeks ago.
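For example, a small audit table along these lines (all names here are assumptions) makes that question answerable:

-- Hypothetical audit table: which customer loaded, when, and how many rows
CREATE TABLE load_log (
  customer    VARCHAR(50) NOT NULL,
  load_date   DATE        NOT NULL,
  rows_loaded INT         NOT NULL
);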
I run a similar scenario daily at work. We use batch files with named parameters for each client; that way the same set of KJBs/KTRs runs for a different client based entirely on those parameters.
What you want to do is set variables in a master job that are used throughout the entire execution.
As for your question directly: in the connection creation tab you can use those variables for the host and DB name. Personally, we have the same user/password set on every client DB, so we don't pass user/password as variables for each connection; we only send the host name and database name as named parameters. We also have a fixed scheduled run that executes the routine for every database, using an "Execute for each input row" type of job. A sketch of such a launcher follows.
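The per-client call might look roughly like this (file names and values are made up, and the exact -param syntax varies by OS and PDI version, as noted above):

#!/bin/sh
# Hypothetical per-client launcher: hand the client's connection
# details to the master job as named parameters.
./kitchen.sh -file=/etl/master_job.kjb \
  -param:HOST_NAME=dbhost.example.com \
  -param:DATABASE_NAME=cust2_db \
  -param:USER_NAME=etl_user \
  -param:PASSWORD=secret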

Delphi 7 - How to save pictures in a database

I have been trying to create a select-your-picture-and-upload function for a normal user's profile in Delphi 7, but I am running into some problems.
Basically what I want is the following:
User uploads a picture from a folder (which I have achieved through a normal OpenPictureDialog component)
Said picture gets stored in a database, which is where I'm stuck.
The database is a normal access database.
The table has a unique ID to identify the members and next to that is the picture of each member on the "Picture field" (which is set as a BLOB object).
So in other words my question is the following:
What components do I need to use in order to save a picture to a specified place in my database?
I have found some random code on the net, but I'm having trouble understanding what it does.
// Select the BLOB column you want to write to (add a WHERE clause
// with the member's ID to target a specific row)
ADOQuery.SQL.Text := 'SELECT PictureField FROM YourTable';
ADOQuery.Open;
// Put the current record into edit mode
ADOQuery.Edit;
// Load the picture file straight into the BLOB field
TBlobField(ADOQuery.FieldByName('PictureField')).LoadFromFile('PathToPictureFile');
// Write the change back to the database
ADOQuery.Post;
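Reading the picture back out works the other way around. A minimal sketch, assuming the stored file was a bitmap and the table has a MemberID column (both assumptions):

var
  Stream: TStream;
begin
  // Fetch the row for the member whose picture we want
  ADOQuery.SQL.Text := 'SELECT PictureField FROM YourTable WHERE MemberID = :ID';
  ADOQuery.Parameters.ParamByName('ID').Value := 1;
  ADOQuery.Open;
  // Stream the BLOB contents into a TImage on the form
  Stream := ADOQuery.CreateBlobStream(ADOQuery.FieldByName('PictureField'), bmRead);
  try
    Image1.Picture.Bitmap.LoadFromStream(Stream);
  finally
    Stream.Free;
  end;
  ADOQuery.Close;
end;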
You can use the ImageEn component; its web site has more information and a trial download.

Importing CSV to database (duplicate entries)

My job requires that I look up information on a long spreadsheet that's updated and sent to me once or twice a week. Sometimes the newest spreadsheet leaves out information that was in the last one, forcing me to look through several spreadsheets to find what I need. I recently discovered that I could convert the spreadsheet to a CSV file and upload it to a database table; with a few lines of script, all I have to do is type in what I'm looking for and voila! Now I've just received the newest spreadsheet and I'm wondering if I can import it on top of the old one. There is a unique number for each row that I have set as the primary key in the database. If I import on top of the current data, will it just skip the rows where the primary key would be duplicated, or will it mess up my database?
Thought I'd ask the experts before I tried it. Thanks for your input!
Details:
The spreadsheet consists of our clients. Each row contains the client's name, a unique ID number, their address, and contact info. I can set the column containing the unique ID as the primary key, then upload it. My concern is that there is nothing to signify a new row in a CSV file (I think); when I upload it, it gives me the option to skip duplicates, but will it skip the entire row or just that cell, leaving my data in the wrong rows? It's an Apache server, and I don't know which MySQL version; I'm using 000webhost for this.
Higgs,
In database/ETL terminology, this issue is called a deduplication strategy.
There is no template answer, but I suggest these helpful readings:
Academic paper: Joint Deduplication of Multiple Record Types in Relational Data
Deduplication article
Some open source tools:
Duke tool
Data cleaner
There's a little checkbox near the bottom when you click Import that says "Ignore duplicates" or something like that. Simpler than I thought.
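In SQL terms, that option corresponds to roughly the following. Since each CSV line maps to one whole table row, duplicate handling is all-or-nothing per row, never per cell (table and column names here are assumptions):

-- Hypothetical import: IGNORE skips whole rows whose primary key already
-- exists; REPLACE would overwrite them instead.
LOAD DATA LOCAL INFILE 'clients.csv'
IGNORE INTO TABLE clients
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(client_id, name, address, phone);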

Keeping database consistency over a REST interface

I have a local database on a mobile device. Every few minutes, the device looks through its tables and sends any data not marked as uploaded to a server through a REST interface. Here is an example table:
id | name | phone | uploaded
1 | "bob" | "444" | 0
What gets sent through the REST interface is:
name : "bob", phone : "444"
and the server will respond with:
status : "OK"
Once this "OK" message is received by the mobile device, it will mark those records as uploaded=1. This should work fine to keep the device consistent with what has actually happened. The problem is that the server might receive two of these messages from the mobile device (for whatever reason) and will insert two records with the exact same data into the server database.
What are some ways to stop this from occuring?
I thought of a uniqueness index over all of the fields in the server database, but I feel that there must be a better way.
You can handle this problem server-side or client-side.
As you already noticed, you can avoid duplicate records in your server component just by adding a UNIQUE constraint to your database table, perhaps on an ID column. With that, every time you try to store a duplicate message, the database will complain.
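A minimal sketch of that idea: have the device send its local row id along with the payload, and make (device_id, client_row_id) unique on the server so a retried upload updates in place instead of duplicating. All names here are assumptions:

-- Hypothetical server table with an idempotency constraint
CREATE TABLE contacts (
  id            INT AUTO_INCREMENT PRIMARY KEY,
  device_id     VARCHAR(64) NOT NULL,
  client_row_id INT         NOT NULL,
  name          VARCHAR(100),
  phone         VARCHAR(32),
  UNIQUE KEY uq_device_row (device_id, client_row_id)
);

-- A retried upload becomes an update, not a second row
INSERT INTO contacts (device_id, client_row_id, name, phone)
VALUES ('device-abc', 1, 'bob', '444')
ON DUPLICATE KEY UPDATE name = VALUES(name), phone = VALUES(phone);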
On the client side, you just need to add a STATE column to your mobile database table. This column should support only three values: UPLOAD_PENDING, UPLOAD_IN_TRANSIT, and UPLOAD_FINISHED. It is meant to replace your UPLOADED field and should be updated when the tuple is created (UPLOAD_PENDING), when an upload has started (UPLOAD_IN_TRANSIT), and when the upload is done (UPLOAD_FINISHED). Check the state value before starting a new upload to avoid the duplicate message problem.
You can choose either of these approaches, but I strongly suggest using both.
