Setting only the ID in a Linq to SQL object - silverlight

I am using LINQ to SQL in a two-tier project where the server tier abstracts the database using LINQ to SQL.
In my client tier I construct an object and send it to the server.
I have a Task, which has a relationship to Reporter (who reported the task), so Task has a ReporterID column in the database, which is a FK to Reporter.ID.
In the LINQ abstraction, my Task has a Reporter property and a ReporterID property.
To save new Tasks, I would like to use the ReporterID, so I have this code:
//Populate the object with the info
Task task = new Task();
task.Title = tbTitle.Text;
task.Description = tbDescription.Text;
task.Severity = ((Severity)lbSeverities.SelectedItem);
//the first state: "open"
task.StateID = 1;
//TODO - Set ReporterID
task.ReporterID = 1;
//Save the task
client.SaveTaskCompleted += new EventHandler<SaveTaskCompletedEventArgs>(client_SaveTaskCompleted);
client.SaveTaskAsync(App.Token, task);
So, the object is constructed and sent to the server, where it is saved using this code:
public Task SaveTask(string token, Task task)
{
    TrackingDataContext dataContext = new TrackingDataContext();
    //Saves/updates the task
    dataContext.Tasks.InsertOnSubmit(task);
    dataContext.SubmitChanges();
    return task;
}
The problem is that I get an exception: "An attempt was made to remove a relationship between a Reporter and a Task. However, one of the relationship's foreign keys (Task.ReporterID) cannot be set to null.".
If I use the Reporter property, it works.
What am I doing wrong?
Thank you,
Oscar

I did some refactoring in my code and the error no longer occurs. It may have been a logic error, but I can't tell exactly what.
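For reference, one common way to avoid this class of error in LINQ to SQL is to make sure the entity handed to InsertOnSubmit carries no stale association state from the client. The following is a minimal sketch (using the property names from the question; this is one possible workaround, not necessarily the refactoring the author actually performed):

```csharp
public Task SaveTask(string token, Task task)
{
    using (var dataContext = new TrackingDataContext())
    {
        // Copy only the scalar values (including the FK) onto a fresh
        // server-side entity, so no deserialized association state from
        // the client confuses the change tracker.
        var fresh = new Task
        {
            Title = task.Title,
            Description = task.Description,
            StateID = task.StateID,
            ReporterID = task.ReporterID
            // (copy any remaining scalar columns, e.g. the severity FK, the same way)
        };
        dataContext.Tasks.InsertOnSubmit(fresh);
        dataContext.SubmitChanges();
        return fresh;
    }
}
```

Setting the FK scalar works here because the fresh entity's Reporter association has never been loaded, so LINQ to SQL has no relationship change to reconcile.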

Related

Dotmim.Sync is throwing exception when synchronizing existing SQLite with SQL Server databases

I get a Dotmim.Sync.SyncException when calling the agent.SynchronizeAsync(tables) function:
Exception: Seems you are trying another Setup tables that what is stored in your server scope database. Please make a migration or create a new scope
This is my code:
public static async Task SynchronizeAsync()
{
    var serverProvider = new SqlSyncProvider(serverConnectionString);
    // The second provider is the SQLite provider, relying on triggers and tracking tables to create the sync environment
    var clientProvider = new SqliteSyncProvider(Path.Combine(FileSystem.AppDataDirectory, "treesDB.db3"));
    // Tables involved in the sync process:
    var tables = new string[] { "Trees" };
    // Creating an agent that will handle the whole process
    var agent = new SyncAgent(clientProvider, serverProvider);
    // Launch the sync process
    var s1 = await agent.SynchronizeAsync(tables);
    await agent.LocalOrchestrator.UpdateUntrackedRowsAsync();
    var s2 = await agent.SynchronizeAsync();
}
I'm the author of Dotmim.Sync.
Do not hesitate to file an issue on GitHub if you are still struggling.
Regarding your issue, I think you have made some tests with different tables.
You need to stick with one set of tables, because DMS needs to create various objects (triggers, stored procedures, and so on).
If you want to test different setups, you need to define different scopes.
Complete documentation is available at https://dotmimsync.readthedocs.io/
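To illustrate the scopes suggestion, here is a hedged sketch: the exact SynchronizeAsync overloads vary between DMS versions, so treat the scope-name parameter (and the "Orders" table) as assumptions for illustration only.

```csharp
var agent = new SyncAgent(clientProvider, serverProvider);

// Give each distinct table set its own named scope, so the server-side
// scope metadata never conflicts between the two setups.
var treesResult = await agent.SynchronizeAsync("treesScope", new[] { "Trees" });
var ordersResult = await agent.SynchronizeAsync("ordersScope", new[] { "Orders" });
```

With named scopes, DMS stores separate tracking metadata per scope, so experimenting with a second table set no longer collides with the first setup stored in the server scope table.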

Grails 3 Runtime Automated Database Custom SQL

We have a large application with about 100 tables. We have been updating the database manually between releases.
We also recently switched from Grails 2 to Grails 3.
We now need the application to create a new database from scratch, for the first time in a while and for the first time since using Grails 3.
The database is varied enough that it needs some manual runtime customization.
To complicate matters, the application is also using Quartz.
The problem is:
After the Application initializes the tables, but before the Application and Quartz start running, we need to inject this set of custom SQL. Where is the correct place to insert the SQL?
I have tried some things like Application.groovy and Bootstrap.groovy, but have not been able to determine (with Grails 3) the appropriate place to inject this custom SQL. For example, the Quartz tasks run and attempt to access certain tables that have not been "corrected" yet, so the app throws errors.
UPDATE
I tried the following in Bootstrap.groovy
Tag.withTransaction {
    String updateSQL = "ALTER TABLE tag DROP COLUMN class;"
    def sql = new groovy.sql.Sql(dataSource)
    sql.executeUpdate(updateSQL)
}
Tag.withTransaction {
    Tag newTag
    newTag = new Tag(name: 'TAG 1').save(flush: true)
}
Tag.withTransaction {
    List tags = Tag.findAll()
    println("=== Tag Size = ${tags.size()}")
}
The executeUpdate() throws an exception saying that the column class cannot be found.
However, if I reorder the three sections as follows:
Tag.withTransaction {
    Tag newTag
    newTag = new Tag(name: 'TAG 1').save(flush: true)
}
Tag.withTransaction {
    List tags = Tag.findAll()
    println("=== Tag Size = ${tags.size()}")
}
Tag.withTransaction {
    String updateSQL = "ALTER TABLE tag DROP COLUMN class;"
    def sql = new groovy.sql.Sql(dataSource)
    sql.executeUpdate(updateSQL)
}
then the executeUpdate() completes successfully (although it is still too late as the Quartz jobs are already running).
I do not understand this at all.
Thank you for the suggestions. For now I will try the Database Migration Plugin, but would still appreciate other suggestions.

Quartz.net configuration and scheduling

I have a WPF application where the user creates entities in the database. Each entity has some metadata and an interval field. For each entity I want to create a job with the interval provided and store them in the AdoJobStore.
Now, since the WPF app will not always be running, I want to create a Windows Service that reads the job data from the AdoJobStore and runs those jobs.
So essentially there are these 2 tiers. I have already set up the Quartz tables in my existing database. My questions are:
1. How to create/edit/delete jobs from my WPF application
2. How to inform my Windows Service to run the jobs (every time an entity is created in the database)
I have read through a lot of blogs, but these 2 primary questions are still unclear to me. I would really appreciate some example code on how to achieve this and perhaps how to structure my solution.
Thanks
You can use a Zero Thread Scheduler to schedule jobs. Example scheduler initialization code:
var properties = new NameValueCollection();
properties["quartz.scheduler.instanceId"] = "AUTO";
properties["quartz.threadPool.type"] = "Quartz.Simpl.ZeroSizeThreadPool, Quartz";
properties["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
properties["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz";
properties["quartz.jobStore.useProperties"] = "true";
properties["quartz.jobStore.dataSource"] = "default";
properties["quartz.jobStore.tablePrefix"] = tablePrefix;
properties["quartz.jobStore.clustered"] = "false";
properties["quartz.dataSource.default.connectionString"] = connectionString;
properties["quartz.dataSource.default.provider"] = "SqlServer-20";
schedFactory = new StdSchedulerFactory(properties);
BaseScheduler = schedFactory.GetScheduler();
Example scheduling function:
protected ITrigger CreateSimpleTrigger(string tName, string tGroup, IJobDetail jd, DateTime startTimeUtc,
    DateTime? endTimeUtc, int repeatCount, TimeSpan repeatInterval, Dictionary<string, string> dataMap,
    string description = "")
{
    if (BaseScheduler.GetTrigger(new TriggerKey(tName, tGroup)) != null) return null;
    var st = TriggerBuilder.Create().
        WithIdentity(tName, tGroup).
        UsingJobData(new JobDataMap(dataMap)).
        StartAt(startTimeUtc).
        EndAt(endTimeUtc).
        WithSimpleSchedule(x => x.WithInterval(repeatInterval).WithRepeatCount(repeatCount)).
        WithDescription(description).
        ForJob(jd).
        Build();
    return st;
}
Obviously, you'll need to provide all the relevant fields in your UI and pass their values into the function.
Your Windows Service will initialize a Multi Thread Scheduler in its OnStart() method, in a very similar fashion to the way the Zero Thread Scheduler was initialized above. That Multi Thread Scheduler will monitor all the triggers in your database and start your jobs as specified in those triggers; Quartz.NET does all the heavy lifting in that regard. Once your jobs are scheduled and the triggers are in the database, all you need to do is initialize the Multi Thread Scheduler and connect it to the database containing the triggers, and it will keep firing those jobs and executing your code for as long as the service is running.
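As a sketch of the first question (creating and deleting jobs from the WPF app), the standard Quartz.NET builders can be used against the same BaseScheduler; MyJob, entityId and intervalMinutes below are placeholders for your own job type and values:

```csharp
// Create: persist a job + trigger for a newly created entity into the AdoJobStore.
var job = JobBuilder.Create<MyJob>()
    .WithIdentity("job-" + entityId, "entities")
    .Build();
var trigger = TriggerBuilder.Create()
    .WithIdentity("trigger-" + entityId, "entities")
    .StartNow()
    .WithSimpleSchedule(x => x.WithIntervalInMinutes(intervalMinutes).RepeatForever())
    .Build();
BaseScheduler.ScheduleJob(job, trigger);

// Delete: remove the job (and its triggers) when the entity is deleted.
BaseScheduler.DeleteJob(new JobKey("job-" + entityId, "entities"));
```

Because both processes point at the same AdoJobStore database, the service's running scheduler acquires newly stored triggers on its own as it polls the store, so no extra signaling from the WPF app is strictly required.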

How would I configure the Effort testing tool to mock Entity Framework's DbContext without the actual SQL Server database up and running?

Our team's application development involves using the Effort testing tool to mock our Entity Framework DbContext. However, it seems that Effort needs to see the actual SQL Server database that the application uses in order to mock the DbContext, which goes against proper unit-testing principles.
The reason is that in order to unit test our application code by mocking anything related to database connectivity (for example Entity Framework's DbContext), we should never need a database to be up and running.
How would I configure Effort to mock Entity Framework's DbContext without the actual SQL Server database up and running?
Update:
@gert-arnold We are using the Entity Framework Model First approach to implement the back-end model and database.
The following excerpt is from the test code:
connection = Effort.EntityConnectionFactory.CreateTransient("name=NorthwindModel");
jsAudtMppngPrvdr = new BlahBlahAuditMappingProvider();
fctry = new BlahBlahDataContext(jsAudtMppngPrvdr, connection, false);
qryCtxt = new BlahBlahDataContext(connection, false);
audtCtxt = new BlahBlahAuditContext(connection, false);
mockedReptryCtxt = new BlahBlahDataContext(connection, false);
_repository = fctry.CreateRepository<Account>(mockedReptryCtxt, null);
_repositoryAccountRoleMaps = fctry.CreateRepository<AccountRoleMap>(null, _repository);
The "name=NorthwindModel" pertains to our edmx file which contains information about our Database tables
and their corresponding relationships.
If I remove the "name=NorthwindModel" and create the connection as in the following line of code, I get an error stating that it expects an argument:
connection = Effort.EntityConnectionFactory.CreateTransient(); // throws error
Could you please explain how the aforementioned code should be rewritten?
You only need that connection string because Effort needs to know where the EDMX file is.
The EDMX file contains all the information required for creating an in-memory store with a schema identical to the one in your database. You have to specify a connection string only because I thought it would be convenient if the user didn't have to mess with EDMX paths.
If you check the implementation of the CreateTransient method, you will see that it merely uses the connection string to get the metadata part of it.
public static EntityConnection CreateTransient(string entityConnectionString, IDataLoader dataLoader)
{
    var metadata = GetEffortCompatibleMetadataWorkspace(ref entityConnectionString);
    var connection = DbConnectionFactory.CreateTransient(dataLoader);
    return CreateEntityConnection(metadata, connection);
}

private static MetadataWorkspace GetEffortCompatibleMetadataWorkspace(ref string entityConnectionString)
{
    entityConnectionString = GetFullEntityConnectionString(entityConnectionString);
    var connectionStringBuilder = new EntityConnectionStringBuilder(entityConnectionString);
    return MetadataWorkspaceStore.GetMetadataWorkspace(
        connectionStringBuilder.Metadata,
        metadata => MetadataWorkspaceHelper.Rewrite(
            metadata,
            EffortProviderConfiguration.ProviderInvariantName,
            EffortProviderManifestTokens.Version1));
}

Database connection from Java handler BIRT

I'm creating a rptlibrary to share with all the reports in my company.
The library has an ODA data source created and shared with all reports. We want to run some queries against the database from ReportEventAdapter.initialize() to get information. I can access the data source in the library this way:
ReportDesignHandle rdh = (ReportDesignHandle)reportContext.getReportRunnable().getDesignHandle();
DesignSessionImpl ds = rdh.getModule().getSession();
String rsf = ds.getResourceFolder( );
LibraryHandle libhan = ds.openLibrary(rsf + "/my.rptlibrary" ).handle( );
DataSourceHandle datasource = libhan.findDataSource("myDS");
But once I have the data source, there's no way to get a connection to the database from it. Is the only way to do this to create a classic JDBC connection using the data from the data source? Is there a more elegant method to connect to the database from the Java event handler, like using pooling or reusing the connection?
Thanks.
We can iterate over dataset values in a report script event; thus, if a dataset is defined with a JNDI URL, the queries can take advantage of a connection pool.
However, it is quite complicated. There is a full example in this topic: the script defined in the "getDefaultValueList" event of the report parameter can be moved anywhere in the report to initialize a global variable. In particular, we could move it to the "initialize" event or to the "beforeFactory" event (in your case, "beforeFactory" is probably what you want).
