I am trying to create an array in Groovy where multiple dates can be stored. For example, I am checking availability on various dates and, if a date is available, I want to add it to my array.
So basically I have a while loop with a variable that decides how many times the loop iterates. The array length should equal the number of iterations.
The expected result should look like:
"2019/12/02","2019/12/03","2019/12/04"
My code:
import groovy.time.TimeCategory

def SeriesDaysNumber = context.expand('${#Project#SeriesDaysNumber}')
def dates = new String[SeriesDaysNumber]
def isAvailable = false

while (isAvailable == false) {
    log.info "Inside While loop"
    for (int i = 0; i < SeriesDaysNumber.toInteger(); i++) {
        // some DB query to check availability (sets res)
        if (res[0].toString() == '0') {
            isAvailable = true
            seriesEndDate2 = "${SeriesEndDate1.format(outputDateFormatSeries)}"
            context.testCase.testSuite.project.setPropertyValue('SeriesEndDate', seriesEndDate2)
            // this date is available so add it to the array; here I use a zero index but I want it dynamic
            dates[i] = seriesEndDate2.toString()
            log.info "dates : " + dates
            use(TimeCategory) {
                // here I am incrementing the date
            }
        }
    }
}
Actual Result
Mon Dec 02 15:24:56 IST 2019:INFO:dates : [2019/12/02, 2019/12/05, 2019/12/08, 2019/12/11, 2019/12/14, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null]
There are four things I need to do to achieve the expected result:
1. Use dates[i] instead of dates[0], to store all the dates in the array.
2. Use def dates = new String[SeriesDaysNumber.toInteger()], to resolve the null issue.
3. Use dates[i] = "\"" + seriesEndDate2.toString() + "\"", to put the dates into double quotes.
4. Use log.info "dates : " + dates.join(", "), to remove the square brackets from the array output.
I am trying to build an SQLite database from an existing CSV file, following the steps from this answer. However, I already get an error in the second step, where the database should be created with the defined table using
sqlite3 worldcities.db < worldcities.sql
The error message reads
Error: near line 1: near ")": syntax error
My worldcities.sql file looks like this:
CREATE TABLE worldcities(
    city TEXT NOT NULL,
    city_ascii TEXT NOT NULL,
    lat REAL NOT NULL,
    lng REAL NOT NULL,
    country TEXT NOT NULL,
    iso2 TEXT NOT NULL,
    iso3 TEXT NOT NULL,
    admin_name TEXT NOT NULL,
    capital TEXT NOT NULL,
    population INT NOT NULL,
    id INT NOT NULL,
);
What did I do wrong here?
I suspect the orphaned ',' just before your ')' is causing your problem.
I tried this at https://extendsclass.com/sqlite-browser.html:
CREATE TABLE newtoy2 (MID INT, BID INT);
without a problem.
But this, which has the same problem I suspect in your code:
CREATE TABLE newtoy1 (MID INT, BID INT,);
gets the same error you quote: near ")": syntax error
Which confirms my suspicion.
There was an extra comma ',' before the end of the block. Commas go after each column definition, but not after the last one.
Try running the code below.
create table worldcities(
    city TEXT NOT NULL,
    city_ascii TEXT NOT NULL,
    lat REAL NOT NULL,
    lng REAL NOT NULL,
    country TEXT NOT NULL,
    iso2 TEXT NOT NULL,
    iso3 TEXT NOT NULL,
    admin_name TEXT NOT NULL,
    capital TEXT NOT NULL,
    population INT NOT NULL,
    id INT NOT NULL);
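With the trailing comma removed, the original command from the question, sqlite3 worldcities.db < worldcities.sql, should run without the near ")" syntax error.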
I am making an Electron application which uses an SQLite database. Inside the app's directory I have a folder named db, and inside it another one, seeder. The seeder directory contains a bunch of .sql files needed for creating the db architecture and seeding the database with the information it needs:
Electron App/
|---main.js
|---index.html
|---node_modules
|---package.json
|---db/
|   |---seed.js
|   |---seeder/
|       |---bunch of .sql files
When I run seed.js, it is supposed to get all filenames inside seeder/ and run each one successively. The files are named as follows:
01.tables.sql
02.countries.sql
03.ekatte.sql - (this one contains all the villages and cities in Bulgaria)
04.programmes.sql
05.schedules.sql
06.subjects.sql
Everything works as expected, except one thing: 01.tables.sql is supposed to create all the tables, and all the other files contain INSERT INTO statements. Some of the tables are not created, and when the Electron app tries to INSERT INTO them, sqlite returns an error: no such table (but only for oblasti, programmes, schedules and subjects).
Here are the contents of 01.tables.sql:
CREATE TABLE countries (
    countryID integer NOT NULL CONSTRAINT countries_pk PRIMARY KEY,
    countryName text NOT NULL,
    countryISO text NOT NULL
);
CREATE TABLE grades (
    gradeID integer NOT NULL CONSTRAINT grades_pk PRIMARY KEY,
    gradeValue integer NOT NULL,
    gradeStudent integer NOT NULL,
    gradeSchedule integer NOT NULL
);
CREATE TABLE oblasti (
    oblastID integer NOT NULL CONSTRAINT oblasti_pk PRIMARY KEY,
    oblastName text NOT NULL,
    oblastCode text NOT NULL
);
CREATE TABLE obshtini (
    obshtinaID integer NOT NULL CONSTRAINT obshtini_pk PRIMARY KEY,
    obshtinaName text NOT NULL,
    obshtinaOblast integer NOT NULL
);
CREATE TABLE programmes (
    programmeID integer NOT NULL CONSTRAINT programmes_pk PRIMARY KEY,
    programmeName text NOT NULL
);
CREATE TABLE schedules (
    scheduleID integer NOT NULL CONSTRAINT schedules_pk PRIMARY KEY,
    scheduleProgramme integer NOT NULL,
    scheduleSemester integer NOT NULL,
    scheduleSubject integer NOT NULL
);
CREATE TABLE selishta (
    selishteID integer NOT NULL CONSTRAINT selishta_pk PRIMARY KEY,
    selishteName text NOT NULL,
    selishteObshtina integer NOT NULL,
    selishteOblast integer NOT NULL
);
CREATE TABLE students (
    studentID integer NOT NULL CONSTRAINT students_pk PRIMARY KEY,
    studentName text NOT NULL,
    studentFamily text NOT NULL,
    studentPicture blob NOT NULL,
    studentBirthplace integer,
    studentForeignCountry integer,
    studentForeignOblast text,
    studentForeignObshtina text,
    studentForeignSelishte text,
    studentEGN integer NOT NULL,
    studentAddress text NOT NULL,
    studentPhone text NOT NULL,
    studentNationality integer NOT NULL,
    studentEduForm integer NOT NULL,
    studentOKS integer NOT NULL,
    studentSemester integer NOT NULL,
    studentClass integer NOT NULL,
    studentFakNomer integer NOT NULL,
    studentStatus integer NOT NULL,
    studentValid integer NOT NULL
);
CREATE TABLE subjects (
    subjectID integer NOT NULL CONSTRAINT subjects_pk PRIMARY KEY,
    subjectName text NOT NULL
);
and this is the part of seed.js responsible for the actual seeding:
const fs = require('fs');            // required modules, not shown in the original snippet
const sqlite3 = require('sqlite3');

const db = new sqlite3.Database('db/database.db');

fs.readdir('db/seeder/', function(err, dir) {
    let fileList = new Array();
    $.each(dir, function(index, fileName) {   // $ is jQuery, available in the renderer
        fileList.push(fileName);
    });
    fileList.sort();
    seedDB(fileList);
});

function seedDB(files) {
    if (files.length > 0) {
        let fileName = files.shift();
        fs.readFile('db/seeder/' + fileName, 'utf-8', function(err, data) {
            db.serialize(function() {
                db.run(data);
            });
            seedDB(files);
        });
    }
}
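One plausible culprit, assuming the node-sqlite3 driver: db.run() executes only the first statement of a multi-statement string, while db.exec() runs them all, so a CREATE TABLE batch passed to run() would only ever create its leading tables. A sketch of seedDB using exec instead:

function seedDB(files) {
    if (files.length > 0) {
        const fileName = files.shift();
        fs.readFile('db/seeder/' + fileName, 'utf-8', function(err, data) {
            if (err) throw err;
            // exec runs every statement in the string, not just the first one
            db.exec(data, function(err) {
                if (err) console.error(fileName, err);
                seedDB(files); // continue with the next file once this one finishes
            });
        });
    }
}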
Below are two procs run in Sybase. The first passes each parameter by name with a value, the second passes only values. The first one runs fine, but when I run the second I get: Implicit conversion from datatype 'INT' to 'VARCHAR' is not allowed. Use the CONVERT function to run this query. Can someone tell me why?
First:
exec pu @a=null, @b=null, @c=null, @d=null, @e=null, @f=null, @g='2013-Jun-12 22:10:00.670', @h=100, @i=2, @j=null, @k=null, @l=null, @m=null, @n=0, @o='P', @p=null, @q=null, @r=null, @s=null, @t='junit', @u=null, @v=null, @w=null
Second:
exec pu ( null, null, null, null, null, null, '2013-Jun-12 22:10:00.187', 100, 2, null, null, null, null, 0, 'P', null, null, null, null, 'junit', null, null, null )
What could also be happening is that the parameters are not declared in the same order as you are expecting them.
In scenario 1 you won't see a problem, since the parameters are referenced by name.
In scenario 2, however, you are using positional references, which could be exposing this issue.
Please check the proc definition and make sure the parameters are declared in the order you expect.
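A minimal sketch of how an argument-order mismatch produces exactly this error; demo_order here is a hypothetical procedure, not the asker's pu:

CREATE PROCEDURE demo_order
    @name VARCHAR(20),
    @qty  INT
AS
    SELECT @name, @qty
GO

-- By name, the order of the arguments doesn't matter:
EXEC demo_order @qty = 100, @name = 'junit'

-- Positionally, 100 binds to @name (VARCHAR) and 'junit' to @qty (INT),
-- so the server raises the implicit INT-to-VARCHAR conversion error:
EXEC demo_order 100, 'junit'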
I have the following two tables in SQL Server 2008
TABLE [JobUnit](
    [idJobUnit] [int] IDENTITY(1,1) NOT NULL,
    [Job_idJob] [int] NOT NULL,     -- foreign key here
    [UnitStatus] [tinyint] NOT NULL -- can be (0 for unprocessed, 1 for processing, 2 for processed)
)
TABLE [Job](
    [idJob] [int] IDENTITY(1,1) NOT NULL,
    [JobName] [varchar](50) NOT NULL
)
Job : JobUnit is a one-to-many relationship.
I am trying to write an efficient stored procedure that would replace the following LINQ statement:
public enum UnitStatus {
    unprocessed,
    processing,
    processed,
}

int jobId = 10;
using (EntityFramework context = new EntityFramework())
{
    if (context.JobUnits.Where(ju => ju.Job_idJob == jobId)
                        .Any(ju => ju.UnitStatus == (byte)UnitStatus.unprocessed))
    {
        // Some JobUnit is unprocessed
        return 1;
    }
    else
    {
        // There is no unprocessed JobUnit
        if (context.JobUnits.Where(ju => ju.Job_idJob == jobId)
                            .Any(ju => ju.UnitStatus == (byte)UnitStatus.processing))
        {
            // JobUnit has some unit that is processing, but none is unprocessed
            return 2;
        }
        else
        {
            // Every JobUnit is processed
            return 3;
        }
    }
}
Thanks for reading
So really, you're just looking for the lowest state of all the units in a particular job?
CREATE PROCEDURE GetJobState @jobId int AS
    SELECT MIN(UnitStatus)
    FROM JobUnit
    WHERE Job_idJob = @jobId
I should also say you could use this approach just as easily in LINQ.
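One footnote on the mapping: the LINQ version returns 1, 2 or 3, while MIN(UnitStatus) yields 0, 1 or 2, so add one if you need identical codes (a sketch; note that MIN returns NULL when the job has no units):

SELECT MIN(UnitStatus) + 1 AS JobState
FROM JobUnit
WHERE Job_idJob = @jobId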
So basically we have lots of SharePoint usage log files generated by our SharePoint 2007 site, and we would like to make sense of them. For that, we're thinking of reading the log files and dumping them into a database with the appropriate columns. I was going to make an SSIS package to read all the text files and extract the data when I came across LogParser. Is there a way to use LogParser to dump data into a SQL Server database, or is the SSIS way better? Or is there any better way to use the SharePoint usage logs?
This is the script we use to load IIS log files into a SQL Server database:
LogParser "SELECT * INTO <TABLENAME> FROM <LogFileName>" -o:SQL -server:<servername> -database:<databasename> -driver:"SQL Server" -username:sa -password:xxxxx -createTable:ON
The <tablename>, <logfilename>, <servername>, <databasename> and sa password need to be changed according to your specs.
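The -createTable:ON switch tells LogParser to create the target table automatically if it doesn't already exist.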
From my experience LogParser works really well for loading data from IIS logs into SQL Server, so a mixed approach is best:
1. Load the raw data from the IIS log into SQL Server using LogParser.
2. Use SSIS to extract and transform the data from the temporary table holding the raw data into the final table you'll use for reporting.
You'll have to write a plugin for LogParser. Here is what I did:
using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;

[Guid("1CC338B9-4F5F-4bf2-86AE-55C865CF7159")]
public class SPUsageLogParserPlugin : ILogParserInputContext
{
    private FileStream stream = null;
    private BinaryReader br = null;
    private object[] currentEntry = null;

    public SPUsageLogParserPlugin() { }

    #region LogParser

    protected const int GENERAL_HEADER_LENGTH = 300;
    protected const int ENTRY_HEADER_LENGTH = 50;
    protected string[] columns = {"TimeStamp",
                                  "SiteGUID",
                                  "SiteUrl",
                                  "WebUrl",
                                  "Document",
                                  "User",
                                  "QueryString",
                                  "Referral",
                                  "UserAgent",
                                  "Command"};

    // Reads a null-terminated string from the binary log
    protected string ReadString(BinaryReader br)
    {
        StringBuilder buffer = new StringBuilder();
        char c = br.ReadChar();
        while (c != 0) {
            buffer.Append(c);
            c = br.ReadChar();
        }
        return buffer.ToString();
    }

    #endregion

    #region ILogParserInputContext Members

    enum FieldType
    {
        Integer = 1,
        Real = 2,
        String = 3,
        Timestamp = 4
    }

    public void OpenInput(string from)
    {
        stream = File.OpenRead(from);
        br = new BinaryReader(stream);
        br.ReadBytes(GENERAL_HEADER_LENGTH);
    }

    public int GetFieldCount()
    {
        return columns.Length;
    }

    public string GetFieldName(int index)
    {
        return columns[index];
    }

    public int GetFieldType(int index)
    {
        if (index == 0) {
            // TimeStamp
            return (int)FieldType.Timestamp;
        } else {
            // Other fields
            return (int)FieldType.String;
        }
    }

    public bool ReadRecord()
    {
        if (stream.Position < stream.Length) {
            br.ReadBytes(ENTRY_HEADER_LENGTH); // entry header
            string webappguid = ReadString(br);
            DateTime timestamp = DateTime.ParseExact(ReadString(br), "HH:mm:ss", null);
            string siteUrl = ReadString(br);
            string webUrl = ReadString(br);
            string document = ReadString(br);
            string user = ReadString(br);
            string query = ReadString(br);
            string referral = ReadString(br);
            string userAgent = ReadString(br);
            string guid = ReadString(br);
            string command = ReadString(br);
            currentEntry = new object[] { timestamp, webappguid, siteUrl, webUrl, document, user, query, referral, userAgent, command };
            return true;
        } else {
            currentEntry = new object[] { };
            return false;
        }
    }

    public object GetValue(int index)
    {
        return currentEntry[index];
    }

    public void CloseInput(bool abort)
    {
        br.Close();
        stream.Dispose();
        stream = null;
        br = null;
    }

    #endregion
}
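To actually use a plugin like this, the assembly has to be registered for COM interop, and LogParser is then pointed at it through its COM input-format support. Roughly, with the class's default ProgID and hypothetical paths:

regasm /codebase SPUsageLogParserPlugin.dll
LogParser "SELECT * INTO STSlog FROM usage.log" -i:COM -iProgID:SPUsageLogParserPlugin -o:SQL -server:localhost -database:SharePoint_SA_IN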
If you want more in-depth reporting and have the cash and computing power, you could look at Nintex Reporting. I've seen a demo of it and it's very thorough; however, it needs to run continuously on your system. Looks cool though.
This is the blog post I used to get all the info needed.
It is not necessary to go to the length of custom code.
In brief, the create table script:
CREATE TABLE [dbo].[STSlog](
    [application] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [date] [datetime] NULL,
    [time] [datetime] NULL,
    [username] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [computername] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [method] [varchar](16) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [siteURL] [varchar](2048) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [webURL] [varchar](2048) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [docName] [varchar](2048) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [bytes] [int] NULL,
    [queryString] [varchar](2048) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [userAgent] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [referer] [varchar](2048) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [bitFlags] [smallint] NULL,
    [status] [smallint] NULL,
    [siteGuid] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
The calls that make LogParser load in the data for a file (the first converts the raw usage log to CSV with STSLogParser, the second loads that CSV into SQL Server):
"C:\projects\STSLogParser\STSLogParser.exe" 2005-01-01 "c:\projects\STSlog\2005-01-01\00.log" c:\projects\logparsertmp\stslog.csv
"C:\Program Files\Log Parser 2.2\logparser.exe" "SELECT 'SharePointPortal' as application, TO_DATE(TO_UTCTIME(TO_TIMESTAMP(TO_TIMESTAMP(date, 'yyyy-MM-dd'), TO_TIMESTAMP(time, 'hh:mm:ss')))) AS date, TO_TIME( TO_UTCTIME( TO_TIMESTAMP(TO_TIMESTAMP(date, 'yyyy-MM-dd'), TO_TIMESTAMP(time, 'hh:mm:ss')))), UserName as username, 'SERVERNAME' as computername, 'GET' as method, SiteURL as siteURL, WebURL as webURL, DocName as docName, cBytes as bytes, QueryString as queryString, UserAgent as userAgent, RefURL as referer, TO_INT(bitFlags) as bitFlags, TO_INT(HttpStatus) as status, TO_STRING(SiteGuid) as siteGuid INTO STSlog FROM c:\projects\logparsertmp\stslog.csv WHERE (username IS NOT NULL) AND (TO_LOWERCASE(username) NOT IN (domain\serviceaccount))" -i:CSV -headerRow:ON -o:SQL -server:localhost -database:SharePoint_SA_IN -clearTable:ON
Sorry, I found out that SharePoint logs are not the same as IIS logs. They are different. How can we parse them?