I have created a new API in Google App Engine using the Google Cloud Endpoints module.
The API contains a method that connects to a Cloud SQL table for insert and select operations. It is still under development, so I am using MySQL for now; when I try to insert non-English characters, they are stored as ? only.
My question is: if I move to Cloud SQL, will this problem be solved or not?
And if I need to change the code to handle non-English characters, how should I do it?
First, you will want to make sure that the character set used for your tables is utf8mb4. The default character set in MySQL is latin1.
You can set the character set on the database using:
ALTER DATABASE db CHARACTER SET = utf8mb4 COLLATE = utf8mb4_unicode_ci;
Note that this won't affect existing tables in the database. If this is a new database and you don't care about the data yet, it would be simpler to re-create the tables using utf8mb4.
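If the tables already exist and you do want to keep them, each one can be converted as well; a minimal sketch, assuming a table named mytable (replace with your own names), and remembering that the client connection must also use utf8mb4 or data can still be mangled on the way in:

```sql
-- Convert an existing table (and its text columns) to utf8mb4:
ALTER TABLE mytable CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

-- Make sure the connection itself uses utf8mb4 as well:
SET NAMES utf8mb4;
```

If you connect over JDBC, the equivalent is usually a character-encoding parameter on the connection URL rather than SET NAMES.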
I am trying to copy CSV files from a local directory into a SQL Server database running on my local machine using Apache NiFi.
I am new to the tool and have spent a few days googling and building my flow. I managed to connect to the source and destination, but I still cannot populate the database, because I get the following error: "None of the fields in the record map to the columns defined by the tablename table."
I have been struggling with this for a while and have not been able to find a solution on the web. Any hint would be highly appreciated.
Here are further details.
I have built a simple flow using the GetFile and PutDatabaseRecord processors.
My input is a simple table with 8 columns.
For the GetFile processor, I have added the input directory and left the rest as default.
For the PutDatabaseRecord processor, I have referenced the CSVReader and DBCPConnectionPool controller services, used the MS SQL 2012+ database type (I have the 2019 version), configured the INSERT statement type, entered the schema and the correct table name, and left everything else as default.
The CSVReader is configured with Schema Access Strategy = Use String Fields From Header and CSV Format = Microsoft Excel.
For the DBCPConnectionPool, I have added the correct URL, DB driver class name, driver location, DB user, and password.
Finally, I have created a table in the database to host the content.
Many thanks in advance!
The warning "None of the fields in the record map to the columns defined by the tablename table." also appears when the processor cannot find the table at all. This can happen even when the table name is correctly configured in PutDatabaseRecord, if there is an issue with user access rights (which turned out to be the actual cause of my error).
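If access rights are the problem, a grant along these lines (nifi_user and dbo.tablename are placeholders for your own login's database user and target table) gives the NiFi connection what PutDatabaseRecord needs:

```sql
-- Let the NiFi connection's database user see and insert into the target
-- table; without SELECT the processor may fail to resolve the table's
-- column metadata.
GRANT SELECT, INSERT ON dbo.tablename TO nifi_user;
```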
I enabled the Always Encrypted option on a varchar column in the database (MS SQL 2019).
Now the ColdFusion (2016) application is generating the error "ByteArray objects cannot be converted to strings."
How do I enable or register encryption for this database on the CF 2016 server?
UPDATE:
Thanks to Miguel-F's comment, I followed the guide below to enable Always Encrypted and configure the data source in CF Administrator.
https://community.hostek.com/t/sql-server-always-encrypted-columns/315#heading--ColdFusion
But I am stuck on the paragraph under the heading
Using Always Encrypted Columns with ColdFusion
....
You must also ensure that the ColdFusion service account has access to the
private key. This service usually runs under the ‘cfusion’ user so you will
want to give read permissions for that user to the private key of the
‘Column Master Key’.
Do I need to create a standard user, log in as that user, and assign it to the ColdFusion 2016 Application Server service? Is that service what is referred to here as "cfusion"?
Then, how would I give that user read permissions for the private key of the 'Column Master Key'? Is that done by running certlm?
The column I encrypted with the Always Encrypted option is nvarchar(50); when encrypted, its collation changed to Latin1_General_BIN2.
I am still getting this error when opening the page that references the column:
ByteArray objects cannot be converted to strings.
Any help would be greatly appreciated.
Gena
I have been looking for an equivalent of Oracle's sys_context('USERENV', 'CLIENT_IDENTIFIER') in SQL Server.
I have been actively looking but found no correct answer. In Oracle, the function above is used to identify the front-end application user, and I want to identify those application users in SQL Server. The answers I have found so far all work at the database level (e.g. which database user is currently logged in), but I want to identify the front-end application user and insert their email or name into a field.
Check out SP_SET_SESSION_CONTEXT to set a value
https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-set-session-context-transact-sql?view=sql-server-ver15
and SESSION_CONTEXT to retrieve it
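A short sketch of the pattern (the key name user_email and the value are assumptions; your application would set its own after opening the connection):

```sql
-- Set once per connection, typically right after the application opens it.
-- @read_only = 1 prevents the value from being changed later on the
-- same session.
EXEC sp_set_session_context @key = N'user_email',
                            @value = N'jane.doe@example.com',
                            @read_only = 1;

-- Read it back anywhere on the same session, e.g. in a trigger or a
-- column default. SESSION_CONTEXT returns sql_variant, so cast it:
SELECT CAST(SESSION_CONTEXT(N'user_email') AS nvarchar(128)) AS app_user;
```

This lets audit triggers record the application user even when every connection logs in with the same shared database login.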
When I browse to the deployed web app on Azure and try to save e.g. a comment or a post in this blog project, I get an error message that it can't save into the database I have on Azure. Yet I have transformed the connection string in my web.release.config to point to the right database on Azure, and I can read data from it too: when I search for a blog or a post on this web app, data from the Azure database is displayed.
But again, I cannot save any data to that database from the web app. When I try to create a new comment or blog, I get something like:
Cannot insert the value NULL into column 'Id', table 'Blog.dbo.Comments'; column does not allow nulls. INSERT fails.
Yet I can create a blog or comment locally and it works just fine, but that is against the local database.
Probably you don't set the Id in your code, and on your localhost the Id field has some kind of auto-increment in your DB (see Auto increment primary key in SQL Server Management Studio 2012), but you don't have that in your Azure SQL database, so the value won't be generated automatically. It will therefore be NULL, and since the column is non-nullable, you get the error message. If you change it as described in the link above, it should work.
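A sketch of what the table would look like with auto-increment (the columns other than Id are assumptions about your schema). Note that SQL Server cannot add the IDENTITY property to an existing column, so you would recreate the table or migrate the data:

```sql
-- Recreate Comments so the server generates Id values automatically:
CREATE TABLE dbo.Comments
(
    Id     INT IDENTITY(1,1) PRIMARY KEY,  -- auto-generated, never NULL
    BlogId INT NOT NULL,                    -- assumed column
    Body   NVARCHAR(MAX) NOT NULL           -- assumed column
);
```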
Is there a way for a normal user (client-side) without elevated privileges (no special database permissions, local administrator, or any of the sort) on the server to get any kind of unique ID from a server (MAC address, database installation ID, server hardware ID) or anything of the kind?
Basically I am looking for an ID to verify the installation. I know I could do it by writing some sort of ID into the registry and the database during a server-side install, but is there a way to do it without installing anything? The minimum requirement is that it works with both MySQL and SQL Server, on both Linux and Windows.
My current research suggests that there is no such thing. As seen in the comment below:
I think any answer is going to require xp_cmdshell since unique
hardware information is not exposed directly to SQL Server
I don't think you can get hardware details directly from SQL Server. It may be possible through other programs that can communicate with both SQL Server and your system hardware.
There is a predefined function in SQL Server, NEWID(), which will give you a unique, random ID:
CREATE TABLE TableName
(
    id uniqueidentifier DEFAULT NEWID()
);
The best I can find is file_guid from sys.database_files. You only need to be in the public role to read it, and it should be unique per database. If you create or remove database files you'll run into trouble, and it does nothing to verify that you're on the same server.
Note that if your database was created prior to SQL Server 2005, this value will be NULL, since it didn't exist back then.
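For example, run in the context of the database you want to identify (no permissions beyond the public role are needed):

```sql
-- file_guid is a per-file GUID assigned when the file is created;
-- the row with file_id = 1 is the primary data file.
SELECT name, file_id, file_guid
FROM sys.database_files;
```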