I generate DACPAC files from automated builds with a generated version number. While this is useful during SqlPackage operations, I need to be able to determine the version number of a DACPAC before doing anything with the file.
What tooling can I use (automated of course) to query the DACPAC file for its version number and description?
Hey, I know you found a solution, but I have an alternative method that may help someone else.
By referencing Microsoft.SqlServer.Management.Dac.dll and using the DacType class:
using System.IO;
using Microsoft.SqlServer.Management.Dac;
(I'm not entirely sure which using statements are needed - I copied these from a larger DAC helper file.)
using (Stream dacPackFileStream = File.Open(this.dacPackFileName, FileMode.Open))
{
    // DacType.Load reads the package metadata; the using block disposes the stream.
    var dacType = DacType.Load(dacPackFileStream);
    return dacType.Version;
}
DACPAC files are actually zip files. Extract the zip and query the DacMetaData file, which is an XML file, using the XPath /DacType/Version.
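A minimal C# sketch of that approach, assuming the metadata entry inside the package starts with "DacMetadata"/"DacMetaData" (the exact entry name can differ between DAC versions) and sidestepping the XML namespace by matching on the element's local name instead of using the literal XPath:

using System;
using System.IO;
using System.IO.Compression; // requires a reference to System.IO.Compression.FileSystem (.NET 4.5+)
using System.Linq;
using System.Xml.Linq;

static class DacpacInspector
{
    // Treats the .dacpac as a zip archive and pulls the <Version> element
    // out of the DacMetadata entry (equivalent to the XPath /DacType/Version).
    public static string GetDacpacVersion(string dacpacPath)
    {
        using (ZipArchive archive = ZipFile.OpenRead(dacpacPath))
        {
            ZipArchiveEntry entry = archive.Entries
                .First(e => e.Name.StartsWith("DacMetadata", StringComparison.OrdinalIgnoreCase));

            using (Stream stream = entry.Open())
            {
                XDocument doc = XDocument.Load(stream);
                return doc.Root
                          .Elements()
                          .First(e => e.Name.LocalName == "Version")
                          .Value;
            }
        }
    }
}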
I use Java 8 and my requirement is to download multiple files from a remote server using the SFTP protocol. It is not necessary to filter the files by name; I need to download all the files in a specific remote folder.
I looked at the library com.hierynomus » sshj for this, but searching the net I have only found how to download a single file.
What I think is that I could use this method,
String localDir = "/home";
String remoteFile = "/home/folder/*";
SSHClient sshClient = setupSshj();
SFTPClient sftpClient = sshClient.newSFTPClient();
sftpClient.get(remoteFile, localDir);
but I'm not sure whether the asterisk in remoteFile will work for my purpose...
Unfortunately, for now I can't try this on the remote server...
Can someone help me?
Thanks, everyone
You need to LIST all the files you want to download:
List<RemoteResourceInfo> entries = sftpClient.ls("/home/folder");
After that, loop over the entries to download them one by one:
for (RemoteResourceInfo remoteFile : entries) {
    if (remoteFile.isRegularFile()) {
        sftpClient.get(remoteFile.getPath(), localDir);
    }
}
Edit: You should also check whether the list entry is really a file; I edited the code accordingly. Though I am not sure whether using !remoteFile.isDirectory() would be better.
I have the following C# code in a console application.
Whenever I debug the application and run query1 (which inserts a new value into the database) and then run query2 (which displays all the entries in the database), I can see the new entry I inserted clearly. However, when I close the application and check the table in the database (in Visual Studio), it is gone. I have no idea why it is not saving.
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Data.SqlServerCe;
using System.Data;
namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                string fileName = "FlowerShop.sdf";
                string fileLocation = "|DataDirectory|\\";
                DatabaseAccess dbAccess = new DatabaseAccess();
                dbAccess.Connect(fileName, fileLocation);
                Console.WriteLine("Connected to the following database:\n" + fileLocation + fileName + "\n");

                string query = "Insert into Products(Name, UnitPrice, UnitsInStock) values('NewItem', 500, 90)";
                string res = dbAccess.ExecuteQuery(query);
                Console.WriteLine(res);

                string query2 = "Select * from Products";
                string res2 = dbAccess.QueryData(query2);
                Console.WriteLine(res2);

                Console.ReadLine();
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
                Console.ReadLine();
            }
        }
    }

    class DatabaseAccess
    {
        private SqlCeConnection _connection;

        public void Connect(string fileName, string fileLocation)
        {
            Connect(@"Data Source=" + fileLocation + fileName);
        }

        public void Connect(string connectionString)
        {
            _connection = new SqlCeConnection(connectionString);
        }

        public string QueryData(string query)
        {
            _connection.Open();
            using (SqlCeDataAdapter da = new SqlCeDataAdapter(query, _connection))
            using (DataSet ds = new DataSet("Data Set"))
            {
                da.Fill(ds);
                _connection.Close();
                return ds.Tables[0].ToReadableString(); // an extension method I created
            }
        }

        public string ExecuteQuery(string query)
        {
            _connection.Open();
            using (SqlCeCommand c = new SqlCeCommand(query, _connection))
            {
                int r = c.ExecuteNonQuery();
                _connection.Close();
                return r.ToString();
            }
        }
    }
}
EDIT: Forgot to mention that I am using SQL Server Compact Edition 4 and VS2012 Express.
It is quite a common problem. You use the |DataDirectory| substitution string. This means that, while debugging your app in the Visual Studio environment, the database used by your application is located in the BIN\DEBUG subfolder (or the x86 variant) of your project. And this works well: you don't get any kind of error connecting to the database or making update operations.
But then, you exit the debug session and you look at your database through the Visual Studio Server Explorer (or any other suitable tool). This window has a different connection string (probably pointing to the copy of your database in the project folder). You search your tables and you don't see the changes.
Then the problem gets worse. You restart VS to go hunting for the bug in your app, but you have your database file listed among your project files and the property Copy to Output Directory is set to Copy Always. At this point Visual Studio obliges and copies the original database file from the project folder to the output folder (BIN\DEBUG), and thus your previous changes are lost.
Now your application inserts/updates the target table again, you again can't find any error in your code, and you restart the loop until you decide to post or search on Stack Overflow.
You can stop this problem by clicking on the database file listed in your Solution Explorer and changing the property Copy To Output Directory to Copy If Newer or Never Copy. You can also update your connection string in the Server Explorer to look at the working copy of your database, or create a second connection: the first one still points to the database in the project folder, while the second one points to the database in the BIN\DEBUG folder. In this way you can keep the original database ready for deployment purposes and schema changes, while with the second connection you can look at the effective results of your coding efforts.
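If you are not sure which physical copy of the .sdf your connection string is actually using, a quick check (an addition of mine, not required for the fix) is to print where |DataDirectory| resolves to from inside the question's Main method:

// For a console app, |DataDirectory| falls back to the application's base
// directory (BIN\DEBUG when debugging) when it has not been set explicitly.
object dataDir = AppDomain.CurrentDomain.GetData("DataDirectory");
Console.WriteLine("DataDirectory resolves to: " + (dataDir ?? AppDomain.CurrentDomain.BaseDirectory));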
EDIT Special warning for MS Access database users: the simple act of looking at your table changes the modified date of your database, even if you don't write or change anything. So the Copy If Newer flag kicks in and the database file is copied to the output directory. With Access, it is better to use Copy Never.
Committing/saving changes across debug sessions is a familiar topic in SQL CE forums. It is something that trips up quite a few people. I'll post links to source articles below, but I wanted to paste the answer that seems to get the best results for the most people:
You have several options to change this behavior. If your sdf file is part of the content of your project, this will affect how data is persisted. Remember that when you debug, all output of your project (including the sdf) is in the bin/debug folder.
You can decide not to include the sdf file as part of your project and manage the file location at runtime.
If you are using "Copy if newer", any project changes you make to the database will overwrite any runtime/debug changes.
If you are using "Do not copy", you will have to specify the location in code (as two levels above where your program is running - see the sketch after this list).
If you have "Copy always", any changes made during runtime will always be overwritten.
Answer Source
Here is a link to some further discussion and how-to documentation.
I am trying to use Hadoop in Java with multiple input files. At the moment I have two files, a big one to process and a smaller one that serves as a sort of index.
My problem is that I need to keep the whole index file unsplit while the big file is distributed to each mapper. Is there any way provided by the Hadoop API to do such a thing?
In case I have not expressed myself correctly, here is a link to a picture that represents what I am trying to achieve: picture
Update:
Following the instructions provided by Santiago, I am now able to insert a file (or the URI, at least) from Amazon's S3 into the distributed cache like this:
job.addCacheFile(new Path("s3://myBucket/input/index.txt").toUri());
However, when the mapper tries to read it, a 'file not found' exception occurs, which seems odd to me. I have checked the S3 location and everything seems to be fine. I have used other S3 locations for the input and output files.
Error (note the single slash after the s3:)
FileNotFoundException: s3:/myBucket/input/index.txt (No such file or directory)
The following is the code I use to read the file from the distributed cache:
URI[] cacheFile = output.getCacheFiles();
BufferedReader br = new BufferedReader(new FileReader(cacheFile[0].toString()));
while ((line = br.readLine()) != null) {
//Do stuff
}
I am using Amazon's EMR, S3 and the version 2.4.0 of Hadoop.
As mentioned above, add your index file to the Distributed Cache and then access it in your mapper. Behind the scenes, the Hadoop framework will ensure that the index file is sent to all the task trackers before any task is executed and will be available for your processing. In this case, the data is transferred only once and will be available for all the tasks related to your job.
However, instead of adding the index file to the Distributed Cache in your mapper code, make your driver class implement the Tool interface (run via ToolRunner) and override the run method. This provides the flexibility of passing the index file to the Distributed Cache through the command prompt while submitting the job.
If you are using ToolRunner, you can add files to the Distributed Cache directly from the command line when you run the job. There is no need to copy the file to HDFS first. Use the -files option to add files:
hadoop jar yourjarname.jar YourDriverClassName -files cachefile1,cachefile2,cachefile3,...
You can access the files in your Mapper or Reducer code as below:
File f1 = new File("cachefile1");
File f2 = new File("cachefile2");
File f3 = new File("cachefile3");
You could push the index file to the distributed cache, and it will be copied to the nodes before the mapper is executed.
See this SO thread.
Here's what helped me solve the problem.
Since I am using Amazon's EMR with S3, I needed to change the syntax a bit, as stated on the following site.
It was necessary to add the name that the system will use to read the file from the cache, as follows:
job.addCacheFile(new URI("s3://myBucket/input/index.txt" + "#index.txt"));
This way, the program understands that the file introduced into the cache is named just index.txt. I also needed to change the syntax for reading the file from the cache: instead of the entire path stored in the distributed cache, only the filename has to be used, as follows:
URI[] cacheFile = output.getCacheFiles();
BufferedReader br = new BufferedReader(new FileReader("index.txt")); // just the fragment name, not the full cache path
while ((line = br.readLine()) != null) {
    //Do stuff
}
Is there any way I can write output to a CSV file using Selenium WebDriver?
Please help.
I believe you want to store your test results in a CSV that can be shared with your fellow members, so you may first consider combining Selenium with one of the test frameworks like JUnit, TestNG, etc. Then, after every test run, you can store the values in a common results CSV for a particular suite. As @AbhijeetVaikar said, you just need to make use of Java file handling to store the output you want.
For example, for reading and writing CSV files you can refer to this library:
Library link - http://sourceforge.net/projects/opencsv/files/opencsv/
Example code
String csv = "C:\\output.csv";
CSVWriter writer = new CSVWriter(new FileWriter(csv));
String [] country = "India#China#United States".split("#");
writer.writeNext(country);
writer.close();
I have developed an ETL package that consumes flat files. The size of the flat files varies from 250 MB to 300 MB.
It works absolutely fine when the file is present in the folder, but it fails when the file is still being generated.
Example: this ETL package runs from 8 AM to 10 AM to check whether the file is present in the folder or not. Now, at some instant (say 9 AM) the file starts being generated, and so far it is only 10 MB. The ETL starts processing the file, just hangs, and fails after 4-5 minutes (it hangs at the script task that checks whether the file is present in the folder or not).
What is the best way to trigger the SSIS package only when the file generation is completely done?
Note: I have no control over the file generation.
Add a For Loop Container with a Boolean variable bFileAccessible:
The Init expression is @bFileAccessible = False
The Eval expression is @bFileAccessible == False
Inside the For Loop Container add a Script Task with a ReadWriteVariable User::bFileAccessible and the following C# script (showing only the Main() method):
public void Main()
{
    try
    {
        using (Stream stream = new FileStream(@"Path\to\your\file", FileMode.Open))
        {
            Dts.Variables["bFileAccessible"].Value = true;
        }
    }
    catch
    {
        Dts.Variables["bFileAccessible"].Value = false;
    }

    Dts.TaskResult = (int)ScriptResults.Success;
}
You should also use a variable for the filename and maybe a little wait interval. For more information about the script see here.
Check the file's modified time each time and compare it with the previous one...
It's not great logic, but it's a reasonable idea if there is no perfect alternative.
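Here is a rough sketch of that idea in a script task (the helper name, the polling interval and the retry count are assumptions of mine, not part of the answer above): the file is treated as completely generated once its last-modified time and size stop changing between two consecutive polls.

using System;
using System.IO;
using System.Threading;

class FileReadinessCheck
{
    // Returns true once the file's timestamp and size have stopped changing
    // between two consecutive polls, i.e. generation is assumed to be finished.
    public static bool WaitUntilFileIsStable(string path, TimeSpan pollInterval, int maxPolls)
    {
        DateTime lastWrite = DateTime.MinValue;
        long lastSize = -1;

        for (int i = 0; i < maxPolls; i++)
        {
            var info = new FileInfo(path);
            if (info.Exists && info.LastWriteTimeUtc == lastWrite && info.Length == lastSize)
            {
                return true;   // no change since the previous poll
            }

            lastWrite = info.Exists ? info.LastWriteTimeUtc : DateTime.MinValue;
            lastSize = info.Exists ? info.Length : -1;
            Thread.Sleep(pollInterval);
        }

        return false;          // the file never settled within the allotted polls
    }
}

// Example: poll every 30 seconds and give up after 20 attempts (10 minutes).
// bool ready = FileReadinessCheck.WaitUntilFileIsStable(@"Path\to\your\file", TimeSpan.FromSeconds(30), 20);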