What is the DBML Storage attribute in LINQ to SQL?

I am currently working on a project that uses LINQ to SQL for database access. It has become necessary for me to manually update the DBML file by right-clicking on it and opening it with an XML editor because I do not want to re-generate the file and lose all of the changes that have been made to association member names.
Can someone please explain what the Storage attribute is used for in the Association element of the DBML file? I have searched this forum and Google to no avail. The Storage attribute is not present in every Association element. Below are two Association elements from my DBML, one without and one with the Storage attribute:
<Association Name="Customer_WorkOrder" Member="Customer" ThisKey="CustomerId" OtherKey="Id" Type="Customer" IsForeignKey="true" />
<Association Name="Sycode_WorkOrder" Member="WorkOrderOrderStatus" Storage="_Sycode" ThisKey="OrderStatus" OtherKey="recno" Type="Sycode" IsForeignKey="true" />

http://msdn.microsoft.com/en-us/library/system.data.linq.mapping.dataattribute.storage.aspx
Gets or sets a private storage field to hold the value from a column.
If no value is set, the generated private backing field is named "_" + the member name; otherwise the Storage value is used as the field name. The terminology is a bit confusing, since "storage" usually refers to the database rather than to the generated code.
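To make this concrete, here is a minimal sketch of the entity code the O/R designer generates for the second Association above (the exact generated code varies by designer version, and the Sycode stub is only for illustration). Because Storage="_Sycode", the property is backed by a field named _Sycode instead of the default _WorkOrderOrderStatus:

using System.Data.Linq;          // EntityRef<T>
using System.Data.Linq.Mapping;  // [Association]

public partial class Sycode { }  // stub entity for illustration

public partial class WorkOrder
{
    // The Storage attribute in the DBML names this backing field.
    private EntityRef<Sycode> _Sycode;

    [Association(Name = "Sycode_WorkOrder", Storage = "_Sycode",
                 ThisKey = "OrderStatus", OtherKey = "recno",
                 IsForeignKey = true)]
    public Sycode WorkOrderOrderStatus
    {
        get { return this._Sycode.Entity; }
        set { this._Sycode.Entity = value; } // the real generated setter also syncs the FK column
    }
}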


Set schema name in SqlUserDefinedTypeAttribute

When creating a UserDefinedType in C# code for the sake of SQLCLR integration, you are required to decorate the class or struct with a SqlUserDefinedType attribute, such as here:
[SqlUserDefinedType(
    Name = "g.Options",
    // ...
)]
public struct Options : INullable {
    // ...
}
Notice that in the "Name" parameter, I attempt to set a schema in addition to the object name. But, when I generate the script in the publish stage of a Visual Studio Database Project, I get:
CREATE TYPE [dbo].[g.Options]
There is no "schema" parameter for SqlUserDefinedType.
I do believe I could write the T-SQL to create the type from the assembly explicitly, but I would like to avoid that: I plan on putting most of my types in different schemas and would rather not register each one via explicit T-SQL.
EDIT:
As Solomon Rutzky points out, you can set the Default Schema in the project properties. It is no substitute for something akin to a 'schema' parameter on SqlUserDefinedType, particularly if you want to work with multiple schemas, but it gets the job done for many people's needs.
A post-deployment script will technically get the job done, but unfortunately, the comparison engine doesn't know about the post-deployment logic and so will perpetually register the schema difference as something that needs to be changed. So all your affected objects will be dropped and re-created on every publish regardless of whether you changed them or not.
The schema name is specified in a single location per project, not per object.
You can set it in Visual Studio via:
"Project" (menu) -> "{project_name} Properties..." (menu option) -> "Project Settings" (tab)
On the right side, in the "General" section, there is a text field for "Default schema:"
OR:
you can manually edit your {project_name}.sqlproj file, and in one of the top <PropertyGroup> elements (one that does not have a "Condition" attribute; the first such element is typically used), you can create (or update if it already exists) the following element:
<DefaultSchema>dbo</DefaultSchema>
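For orientation, a simplified sketch of where that element lives in the .sqlproj (the surrounding properties are made up for illustration; a real project file contains many more entries):

<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Name>MyDatabaseProject</Name>
    <DefaultSchema>dbo</DefaultSchema>
  </PropertyGroup>
</Project>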
HOWEVER, if you want to set one object (such as a UDT) to a different schema name than the rest of the objects are using, that has to be done manually in a post-deployment SQL script. You can add a SQL script to your project, select it in the Solution Explorer, go to its Properties, and for "Build Action" select "PostDeploy". In that post-deploy script, issue an ALTER SCHEMA statement:
ALTER SCHEMA [g] TRANSFER TYPE::dbo.Options;

Add data from other object within SSIS package to populate a field for a table

There are many aspects of what I want to do but I think learning one piece will let me derive the rest.
I have an SSIS package that uses PowerShell to download a publicly available zip file, an execute-script step to unzip it with 7zip, and then data flows to load the unzipped files into their corresponding tables.
What I want to do is add the file name (and eventually other aspects of the file like creation date, record counts and so on) from any one of the unzipped files to a log table that keeps track of the summary level details of the files.
How do I dynamically store this type of information as part of the package? Derived columns? But what's the input? Thanks!
There are many options for dynamically working with files through SSIS. Below is an overview of one method. Of course this can vary, depending on your specific needs and requirements.
Add a Foreach Loop Container. On the Collection pane, the Folder property can be set either through the GUI or through a parameter or variable via the Directory expression. Searching subfolders can also be enabled by checking the "Traverse subfolders" checkbox or by using the Recurse expression, just like the Folder field.
The Files field indicates which files to use, and wildcards can be used: * matches any number of characters. For example, *.csv will get all csv files regardless of name, and Test*.txt will return all .txt files with names beginning with Test, regardless of how many or which characters follow. To limit this to a single character, use ? instead. The FileSpec expression allows this to be set dynamically by variable or parameter, similar to the directory.
The Variable Mappings pane allows you to set a variable that will hold the current file name on each iteration of the loop. To map it, add the variable at index 0.
You indicated that you wanted to store the file name. The detail of this can be controlled from the "Retrieve file name" field on the Collection window. As their names imply, Fully Qualified will hold the complete file path, Name and Extension will return the file name with extension, and Name Only is just the file name.
As for other aspects of the file, I'd recommend using a Script Task for more functionality. The C# FileInfo class provides options for finding details about the file, such as the creation date, the last time the file was accessed, and when the file was most recently written to; see the sketch below. Additional information can be found in the FileInfo documentation.
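As a rough sketch of that Script Task body (variable names such as User::FileName and User::FileCreated are made up for illustration; they would need to be listed in the task's ReadOnlyVariables/ReadWriteVariables, and Dts and ScriptResults come from the SSIS script template):

using System.IO;

public void Main()
{
    // Fully qualified path supplied by the Foreach Loop via Variable Mappings.
    string path = Dts.Variables["User::FileName"].Value.ToString();

    FileInfo info = new FileInfo(path);

    // Hand the file details back to package variables for later logging.
    Dts.Variables["User::FileCreated"].Value = info.CreationTime;
    Dts.Variables["User::FileAccessed"].Value = info.LastAccessTime;
    Dts.Variables["User::FileWritten"].Value = info.LastWriteTime;

    Dts.TaskResult = (int)ScriptResults.Success;
}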
For the record counts from the files, you'll need to create a Connection Manager and work with the data within the package. I'm assuming these are flat files? If so, creating a Flat File Connection Manager and setting the same variable from the Variable Mappings pane of the Foreach Loop on the ConnectionString expression of the Connection Manager will allow you to dynamically loop through each file. Make sure the Fully Qualified option is used for the "Retrieve file name" field, as described earlier, if you decide to do this. You will also want to configure the correct columns and data types for the Connection Manager ahead of time. The same process can be followed for Excel files; however, the variable holding the file name is used on the ExcelFilePath expression instead.
As for storing information about a file in a log table, there are a multitude of options. A very basic example of an INSERT statement within an Execute SQL Task placed inside the Foreach Loop is below. The three-part table name is only necessary if you're using a table that differs from the initial catalog of the Connection Manager. The ? is the parameter marker (assuming this is an OLE DB connection). After this, map the same variable/parameter that stores the file name using the Parameter Mapping pane: set the direction to Input, pick an appropriate data type (likely VARCHAR/NVARCHAR), put 0 in the Parameter Name field to indicate this is the first parameter in the SQL statement (additional ? markers can be used for subsequent parameters, incrementing this field accordingly), and the default Parameter Size can be left at -1. Again, this is a simple example and you'll probably want to store more about the files and their contents, but it can get you started.
Sample SQL Insert:
INSERT INTO YourDataBase.YourSchema.YourTable (ColumnToHoldFileName)
VALUES (?)
You can use a variable to store the file name as you loop through the files; after a file has been loaded to its table, you can use the current file name to insert into or update the log table.
Figured it out from looking at other posts. I had to expand the parameter size... easy fix!

Epicor asking for password after making a table change

Epicor - what a beastly creature!
Epicor is asking for a password after making a table change. Any idea why?
We removed the relationship from the part table and set up a criteria instead. Now it is asking for a password, which should not be happening.
The login prompt happens when I try to run the report. I am trying to figure out what I did to aggravate Epicor. The table was already there; I removed the relationship (part table) and added a criteria instead, otherwise that is exactly what I would have done. The only reason I did not add a table to the report data definition, like I originally wanted to, is that the parts table could only be added once, which is why I removed the relationship and added a criteria instead.
From your description, it sounds like the problem is related to the xml generated by Epicor for a non-BAQ based report data definition. Crystal and SSRS reports ask for login information when more than one datasource is referenced in the report, or when improper relationships are defined.
Note:
If you are not a report developer and you have modified this in an attempt to change the end data, I recommend you contact the report developer responsible for maintaining these before proceeding. Otherwise, read on.
Based on my experience, I would say if you are confident in the new relationship structure you have in the report data definition, the solution to this problem is likely within the report itself. Generate an xml file by running a test report, then open the .rpt (or .rdl) associated with this report and set the datasource to the new xml file. This should update the new xml schema used as the datasource. Even if none of the fields were changed in the data definition, the datasource schema definition that is stored in these files define exactly the data formatting that the report expects to receive when it is opened by Epicor.
If that doesn't solve the problem and you are using Crystal, the xml relationships may be defined in a way that affects how the data is displayed, which can be adjusted using Database Expert -> Links tab in Crystal. You should reconnect all of the links to match the report data definition within Epicor.
If none of that works, open up and view the xml file.
It is not unheard of for report data definitions in Epicor to break behind the scenes when altering relationships, and the xml file generated by the test report may not be a well-formed xml file. I have seen many xml files with unclosed elements and similar defects that cause various problems when attempting to run the report. In this case, my recommendation is to create a completely new report data definition (do not copy), and re-enter all of the parameters that existed in the former definition. Repeat the refreshing of the report datasource as described above, and this problem should be fixed.

What purpose does a .DDF file have?

Hey, can someone tell me what the Field, File and Index .ddf files do in Pervasive? Do they have to be changed or updated when a table definition changes? Any insight would be GREATLY appreciated.
Cheers.
FILE.DDF links the underlying Btrieve Data files to a logical table name.
FIELD.DDF uses the File Id from FILE.DDF to define all of the fields including offsets, data types, etc for each table.
INDEX.DDF defines the indexes on the fields in FIELD.DDF.
They are the table and field metadata used by PSQL to access the data files through relational access methods (ODBC, OLEDB, ADO.NET, etc.).
They do have to be changed if the underlying data file is changed through Btrieve. If the table definition changes through SQL (ALTER TABLE statements), the Pervasive Control Center, DTI (Distributed Tuning Interface), DTO (Distributed Tuning Objects), PDAC, ActiveX, or DDF Builder, then the DDFs are updated automatically.
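Since the DDFs back the relational layer, they can themselves be queried through SQL as system tables. A hedged sketch, assuming the standard Pervasive system table names (FILE.DDF surfaces as X$File and FIELD.DDF as X$Field):

-- List every table and its fields in offset order.
SELECT f.Xf$Name AS TableName,
       e.Xe$Name AS FieldName,
       e.Xe$Offset AS FieldOffset
FROM X$File f
JOIN X$Field e ON e.Xe$File = f.Xf$Id
ORDER BY f.Xf$Name, e.Xe$Offset;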

Using CONTENT keyword while creating a table with an XML column from XML Schema Collection

While creating a table that has an XML type column, I am referring to a complex XML Schema Collection. When I specify the XML Schema, I have the option of mentioning either the CONTENT or the DOCUMENT keyword. The latter ensures that the XML data stored in the column is a complete document rather than a fragment.
According to a video tutorial the CONTENT will store the XML data in fragments.
Besides the above statement, I can't find a reference anywhere else regarding the usage of the CONTENT keyword and its implications for schema and data.
I would like to know how the fragments are created and managed, and whether and how they can be queried individually. Further, how are the fragments correlated? And when I amend the XML Schema Collection, what is the impact?
Actually, I think SQL Server 2005 XML is quite well documented.
CONTENT is the default and allows any valid XML. DOCUMENT is more specific and means that the XML data you store is only allowed to have a single top-level node. (A contrasting sketch follows the SELECT example below.)
Create:
CREATE TABLE XmlCatalog (
    ID INT PRIMARY KEY,
    Document XML(CONTENT myCollection))
Insert:
INSERT INTO XmlCatalog VALUES (2,
'<doc id="123">
<sections>
<section num="1"><title>XML Schema</title></section>
<section num="3"><title>Benefits</title></section>
<section num="4"><title>Features</title></section>
</sections>
</doc>')
Select:
SELECT Document.query('/doc[@id = 123]//section')
FROM XmlCatalog
WHERE Document.exist('/doc[@id = 123]') = 1
...and so on. The query language amounts, more or less, to a subset of XPath 1.0.
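To make the CONTENT/DOCUMENT distinction concrete, here is a small sketch (the XmlKinds table is made up, and the hypothetical myCollection from above is assumed to declare a doc element permissive enough for these values):

CREATE TABLE XmlKinds (
    AnyXml XML(CONTENT myCollection),  -- fragments allowed; CONTENT is also the default
    DocXml XML(DOCUMENT myCollection)) -- exactly one top-level element required

-- Accepted: CONTENT permits multiple top-level elements.
INSERT INTO XmlKinds (AnyXml) VALUES ('<doc id="1"/><doc id="2"/>')

-- Accepted: a single top-level node satisfies DOCUMENT.
INSERT INTO XmlKinds (DocXml) VALUES ('<doc id="3"/>')

-- Rejected: DOCUMENT does not allow a fragment with two top-level elements.
-- INSERT INTO XmlKinds (DocXml) VALUES ('<doc id="1"/><doc id="2"/>')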
If you amend an XSD, it is checked on inserts and updates, and it is stored within the xml of each element. As far as I understand the docs, it is also allowed to add multiple schemas for one column, so that entries can reference different schemas.
EDIT:
Ok, after reading the specific parts of the documentation, I think I understand what your problem is. The reference isn't very clear on that point, but as far as I understand it, only entries with one top-level node can be bound to XSD schemas.
Due to the fact that XSD schemas require a single top-level node referencing the XSD in use, it won't be possible to validate fragments containing more than one top-level element. I haven't tried it, but I think it can't be done.
However, it seems to be valid to define a CONTENT column, amend an XSD, and store both XML with one top-level node referencing the XSD and XML fragments, which will only be checked for well-formedness. The fragments can be accessed using the XPath query language shown in the SELECT statement above.
I can't tell you much about performance implications. The reference mentions that XSDs are stored inline, so this will need some extra space within the db. The XPath queries need to be executed too; although XPath is usually quite fast, I would guess it could decrease performance, since it needs to be performed on each row to get the result. To be sure, I think you have to check the execution plan for your specific query, depending on the size and complexity of the stored xml as well as the XPath expression.
