Error message in phpMyAdmin when installing database: #1050 - Table 'batch' already exists

Step 1: Extract the downloaded file from ThemeForest.
Step 2: You will see the folders "DATABASE", "Documentation", "For Existing Drupal Installation", and "For New Fresh Drupal Installation".
Step 3: Create a database and a user name for that database, and set the database permissions for that user name.
Step 4: Open phpMyAdmin, select the database you just created, and import the DEMO database from the "DATABASE" folder. Make sure there aren't any errors during the import of the database.
After step 4, the message below pops up:
-- Table structure for table batch
CREATE TABLE batch (
  bid int(10) UNSIGNED NOT NULL COMMENT 'Primary Key: Unique batch ID.',
  token varchar(64) CHARACTER SET ascii NOT NULL COMMENT 'A string token generated against the current user''s session id and the batch id, used to ensure that only the user who submitted the batch can effectively access it.',
  timestamp int(11) NOT NULL COMMENT 'A Unix timestamp indicating when this batch was submitted for processing. Stale batches are purged at cron time.',
  batch longblob COMMENT 'A serialized array containing the processing data for the batch.'
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='Stores details about batches (processes that run in…';
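The #1050 error means the import is trying to create a table that already exists, i.e. the selected database already contains a Drupal installation. A minimal sketch of the usual workarounds, assuming the existing data can be discarded (destructive; back up first):

-- Option 1: drop the conflicting table(s) before re-running the import.
DROP TABLE IF EXISTS batch;
-- ...repeat for any other table reported as already existing...

-- Option 2: create a fresh, empty database and import the demo into that.
CREATE DATABASE drupal_demo CHARACTER SET utf8mb4;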

Related

External table always inserts into a new file, is there any way to write to the same file?

I have an external table in SQL Server that points to CSV files in a folder of Azure Blob Storage. I enabled PolyBase export and am trying to insert data using an INSERT query. It works, but it always creates a new file.
Is there any way I can write to a single file, or give a file name while inserting?
Here's my table:
CREATE EXTERNAL TABLE archive.filetransferauditlog (
[id] [int] NULL,
[STATUS] [varchar](10) NULL,
[EVENT] [varchar](10) NULL,
[fileNameWithPath] [varchar](2048) NULL,
[eventStartDate] [datetime] NOT NULL,
[eventEndDate] [datetime] NOT NULL,
[description] [varchar](4096) NULL,
[loggedInUserId] [int] NULL,
[transferType] [int] NULL
)
WITH (
LOCATION = '/filetransferauditlog/',
DATA_SOURCE = archivepurgedataExternalDataSource,
FILE_FORMAT = ParquetFile
)
GO
The query I am using:
INSERT INTO archive.filetransferauditlog
SELECT TOP (5) *
FROM dbo.filetransferauditlog;
Please suggest any way we can give the file name while inserting.
When I point the table's LOCATION at a single file instead of a directory, I am able to run a SELECT query but not an INSERT.
It returns the error below:
java.sql.SQLException: Cannot execute the query "Remote Query" against OLE DB provider "SQLNCLI11" for linked server "SQLNCLI11". CREATE EXTERNAL TABLE AS SELECT statement failed as the path name 'wasbs://demoarchive@testarchivedemo.blob.core.windows.net/filetransferauditlogText/QID5060_20220607_54101_0.txt' could not be used for export. Please ensure that the specified path is a directory which exists or can be created, and that files can be created in that directory.
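For context, the error is about PolyBase's export path handling: the export location must be a directory, because PolyBase generates the output file names itself (the QID-prefixed file name in the message above). A minimal sketch of a CETAS export into a directory, reusing the data source and file format from the table definition above (the target directory name is hypothetical):

CREATE EXTERNAL TABLE archive.filetransferauditlog_export
WITH (
    LOCATION = '/filetransferauditlog_export/',   -- must be a directory, not a file
    DATA_SOURCE = archivepurgedataExternalDataSource,
    FILE_FORMAT = ParquetFile
)
AS
SELECT * FROM dbo.filetransferauditlog;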
As per the Microsoft docs, the reason behind this error is that a PolyBase external table repeatedly reads all the files under its location, so the columns or data may show discrepancies.
Presumably you created the external table first and then used INSERT INTO ... SELECT to export to the external location. While exporting, only data can be exported, not columns.
As for "please suggest any way we can give the file name while inserting":
If you want the data for each table to consist of a single file, put the file name in the LOCATION value after the directory.
If you want multiple files per table, set the files into different directories.
If you want to create a table on top of a CSV file, just use LOCATION '/warehouse/develop/myfile'.
Make sure you are giving the correct path in the LOCATION attribute, and give your location along with your file name, as below:
CREATE EXTERNAL TABLE YourTableName (col1 int)
WITH (LOCATION = '/filetransferauditlog/YourFileName', DATA_SOURCE = archivepurgedataExternalDataSource);

or

CREATE EXTERNAL TABLE YourTableName (col1 int)
WITH (LOCATION = '/filetransferauditlog/', DATA_SOURCE = archivepurgedataExternalDataSource);
For inserting into the same file, use a JOIN query while exporting. Please check whether you are using the correct format, as in the sample below:
INSERT INTO dbo.filetransferauditlog
SELECT T1.* FROM Insured_Customers T1 JOIN CarSensor_Data T2
ON (T1.CustomerKey = T2.CustomerKey)
WHERE T2.YearMeasured = 2009 AND T2.Speed > 40;
For more detailed information, please refer to the links below:
PolyBase query scenarios
PolyBase errors and possible solutions
Here is the query that you should execute (note that this is MySQL's SELECT ... INTO OUTFILE syntax):
SELECT *
INTO OUTFILE 'C:\\Donnees\\dev\\table_exp.txt'
FIELDS TERMINATED BY ';' ENCLOSED BY '"'
LINES STARTING BY 'export-table' TERMINATED BY '$\n'
FROM name_of_table;
Note:
INTO OUTFILE: this is the output file path.
FIELDS TERMINATED BY: the symbol that separates the column values.
ENCLOSED BY: the symbol that frames the column values.
LINES STARTING BY: this is how each record in the file will start.
TERMINATED BY: the symbol with which each record ends.
We posted the same question to the Microsoft community and found the answer and the workaround there:
https://techcommunity.microsoft.com/t5/sql-server/external-table-always-insert-in-a-new-file-is-there-any-way-to/m-p/3480998#M1680
Thanks everyone for the quick help.

Yugabyte audit logs don't show up in tserver logs

I've enabled audit logs in YugabyteDB following the instructions here: https://docs.yugabyte.com/preview/secure/audit-logging/audit-logging-ysql/
To test it, I ran a CREATE TABLE command (in pgAdmin 4) and saw the expected audit log in the query terminal, e.g.:
NOTICE: AUDIT: SESSION,2,1,DDL,CREATE TABLE,TABLE,public.employees,
"create table employees ( empno int, ename text, address text, salary int,
account_number text );",<not logged>
CREATE TABLE
However, when I try to find the same log snippets in the tserver log files, I don't see any entries that would confirm my audit logging is working. Is there a way to fix this?
Found it in the postgres log file.
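For reference, a minimal sketch of the pgaudit session settings that produce a NOTICE like the one above, assuming the setup from the linked docs (the audit records land in the PostgreSQL log file, not in the tserver's own logs):

-- Hypothetical session-level configuration; the linked docs set the same
-- parameters cluster-wide via the ysql_pg_conf_csv tserver flag.
CREATE EXTENSION IF NOT EXISTS pgaudit;
SET pgaudit.log = 'DDL';        -- audit DDL statements such as CREATE TABLE
SET pgaudit.log_client = ON;    -- echo audit entries back to the client as NOTICEs
SET pgaudit.log_level = notice;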

Pulling rows from .log file into SQL Server table

I have a very flat, simple log file (6 rows, of which one is blank) that I want to insert into a simple 5-column SQL Server table.
Please excuse my SQL ignorance, as my knowledge around this topic is limited.
Below is the .log file content:
-----------Log File content start----------
07/30/2016 00:02:03 : BATCH CLOSE SUMMARY
MerchantID - 000022673665
TerminalID - 013
BatchItemCount - 650
NetBatchTotal - 5095.00
----------Log file content end-------------
Below is the simple SQL Server table layout:
CREATE TABLE dbo.CCClose
(
CloseTime NVARCHAR(50) NOT NULL,
MercID NVARCHAR(50) NOT NULL,
TermID NVARCHAR(50) NOT NULL,
BatchCount NVARCHAR(30) NOT NULL,
NetBatchTotal NVARCHAR(50) NOT NULL
);
I'm hoping to somehow have each row examined by SQL, for example:
if the .log line is like 'BATCH CLOSE SUMMARY' then insert into CloseTime, else
if the .log line is like 'MerchantID' then insert into MercID, else
if the .log line is like 'BatchItemCount' then insert into BatchCount, else
if the .log line is like 'NetBatchTotal' then insert into NetBatchTotal.
Of course it would be great if the proper formatting for each column were in place, but at this time I'm just looking at getting the .log file data populated from a directory of these logs.
I plan to use Crystal Reports to build on the SQL Server tables.
This is not going to be a simple process. You can probably do it with BULK INSERT. The idea is to read the file into a staging table, using:
a row terminator of something like "----------Log file content end-------------" + newline,
a field terminator of a newline,
a staging table with several varchar columns.
Then process the staging table to extract the values (and types) that you want. There are probably other options if you set up a format file, but that adds another level of complexity.
I would read the file into a staging table with one row per line of the file. Then I would:
use window functions to assign a record number to rows, based on the "content start" lines,
aggregate based on the record number,
extract the values using aggregations, string functions, and conversions.
A sketch of this staging approach follows.
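A minimal sketch under simplifying assumptions: one batch-close record per file, a hypothetical file path, and illustrative staging-table and parsing choices:

-- Stage the raw lines, one log line per row.
CREATE TABLE dbo.CCCloseStaging (line NVARCHAR(4000));

BULK INSERT dbo.CCCloseStaging
FROM 'C:\logs\batchclose.log'   -- hypothetical path
WITH (ROWTERMINATOR = '\n');

-- Pivot the name/value lines into one dbo.CCClose row.
INSERT INTO dbo.CCClose (CloseTime, MercID, TermID, BatchCount, NetBatchTotal)
SELECT
    MAX(CASE WHEN line LIKE '%BATCH CLOSE SUMMARY%' THEN LEFT(line, 19) END),
    MAX(CASE WHEN line LIKE 'MerchantID%'     THEN LTRIM(SUBSTRING(line, CHARINDEX('-', line) + 1, 50)) END),
    MAX(CASE WHEN line LIKE 'TerminalID%'     THEN LTRIM(SUBSTRING(line, CHARINDEX('-', line) + 1, 50)) END),
    MAX(CASE WHEN line LIKE 'BatchItemCount%' THEN LTRIM(SUBSTRING(line, CHARINDEX('-', line) + 1, 50)) END),
    MAX(CASE WHEN line LIKE 'NetBatchTotal%'  THEN LTRIM(SUBSTRING(line, CHARINDEX('-', line) + 1, 50)) END)
FROM dbo.CCCloseStaging;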

SQL Azure raises 40197 error (level 20, state 4, code 9002)

I have a table in a SQL Azure DB (S1, 250 GB limit) with 47,000,000 records (3.5 GB total). I tried to add a new calculated column, but after 1 hour of script execution I get: "The service has encountered an error processing your request. Please try again. Error code 9002." After several tries, I get the same result.
Script for simple table:
create table dbo.works (
work_id int not null identity(1,1) constraint PK_WORKS primary key,
client_id int null constraint FK_user_works_clients2 REFERENCES dbo.clients(client_id),
login_id int not null constraint FK_user_works_logins2 REFERENCES dbo.logins(login_id),
start_time datetime not null,
end_time datetime not null,
caption varchar(1000) null)
Script for alter:
alter table user_works add delta_secs as datediff(second, start_time, end_time) PERSISTED
Error message:
9002 is the SQL Server error raised when the transaction log file cannot grow.
But in Azure I cannot manage this parameter.
How can I change my structure in populated tables?
Azure SQL Database has a 2 GB transaction size limit, which you are running into. For schema changes like yours you can create a new table with the new schema and copy the data into it in batches, as in the sketch below.
That said, the limit has been removed in the latest service version, V12. You might want to consider upgrading to avoid having to implement a workaround.
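A minimal sketch of the batched copy, assuming the dbo.works definition above (the new table name and batch size are arbitrary):

-- New table with the computed column included from the start.
CREATE TABLE dbo.works_new (
    work_id int NOT NULL CONSTRAINT PK_WORKS_NEW PRIMARY KEY,
    client_id int NULL,
    login_id int NOT NULL,
    start_time datetime NOT NULL,
    end_time datetime NOT NULL,
    caption varchar(1000) NULL,
    delta_secs AS DATEDIFF(second, start_time, end_time) PERSISTED
);

-- Copy in batches so each INSERT commits as its own small transaction.
DECLARE @batch int = 500000, @maxid int = 0;
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.works_new (work_id, client_id, login_id, start_time, end_time, caption)
    SELECT TOP (@batch) work_id, client_id, login_id, start_time, end_time, caption
    FROM dbo.works
    WHERE work_id > @maxid
    ORDER BY work_id;

    IF @@ROWCOUNT = 0 BREAK;

    SELECT @maxid = MAX(work_id) FROM dbo.works_new;
END;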
Look at sys.database_files while connected to the user database. If the log file's current size has reached its max size, then you have hit this limit. At that point you either have to kill the active transactions or move to a higher tier (if killing them is not an option because of the amount of data you are modifying in a single transaction).
You can also get the same information with:
DBCC SQLPERF(LOGSPACE);
A couple of ideas:
1) Try creating an empty column for delta_secs, then filling in the data separately. If this still results in transaction log errors, try updating part of the data at a time with a WHERE clause.
2) Don't add a column. Instead, add a view with delta_secs as a calculated field; see the sketch after the link below. Since this is a derived field, this is probably a better approach anyway.
https://msdn.microsoft.com/en-us/library/ms187956.aspx
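A minimal sketch of the view approach, assuming the dbo.works definition above (the view name is hypothetical):

CREATE VIEW dbo.works_with_delta
AS
SELECT work_id, client_id, login_id, start_time, end_time, caption,
       DATEDIFF(second, start_time, end_time) AS delta_secs
FROM dbo.works;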

Unknown database [...] when selecting the database (Ubuntu)

I'm trying to work with MySQL on my laptop (Ubuntu), and whenever I have to load a .sql file into a database, the console shows me the same message: "unknown database Spotify (for example) when selecting the database".
The SQL script is correct and should work, but it always shows the same message; any solution?
CREATE DATABASE Spotify;
USE Spotify ;
DROP TABLE IF EXISTS Spotify.Usuarios ;
CREATE TABLE IF NOT EXISTS Spotify.Usuarios
(
iduser INT NULL ,
user VARCHAR(10) NULL ,
password VARCHAR(45) NULL ,
reg VARCHAR(45) NULL
) ENGINE = InnoDB;
Finally, I solved it: there was a problem with the Ubuntu packages, and the MySQL installation hadn't finished correctly.
