I have a database with a table called Articles.
The table stores articles and has a CDTimeStamp field.
The CDTimeStamp field was altered like this, so that it always holds the correct creation date:
ALTER TABLE [dbo].[Artikel] ADD CONSTRAINT [DF_Artikel_CDTimeStamp] DEFAULT (getdate()) FOR [CDTimeStamp]
GO
When I try to add an article, however, I get an error. The article is added like this:
public void AddArticle()
{
    this.Open();

    Article article = new Article();
    article.Description = "";
    article.ArticleNr = GetArticleNumber();
    article.Barcode = GetBarcode();   // EAN
    article.Branch = GetBranch();     // 3-digit number
    article.Company = GetCompany();   // 1 or 2
    article.Preis = GetPrice();
    article.PreisNew = GetNewPrice();
    //article.CDTimeStamp = DateTime.Now;

    _OutDataContext.Artikel.InsertOnSubmit(article);

    try
    {
        this.Submit();
    }
    catch (Exception)
    {
        throw;
    }

    this.Close();
}
The error I get is:
SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and
12/31/9999 11:59:59 PM.
If I uncomment the //article.CDTimeStamp = DateTime.Now; line, a DateTime is created and inserted, but the getdate() default value should be inserted, not a value I create in my program.
My question is:
Is there a configuration entry or something similar that enables calling the default function? The database field may not be null.
P.S.
I wasn't quite sure what to title this question, so please feel free to edit it if you know a more fitting one.
I think ColumnAttribute.IsDbGenerated is what you are looking for. (Without it, LINQ to SQL includes CDTimeStamp in the INSERT statement, and an uninitialized DateTime is 0001-01-01, which lies outside the SQL Server datetime range and causes the overflow.)
In the class mapping, use something like:

class Article
{
    [Column(..., IsDbGenerated = true)]
    public DateTime CDTimeStamp { get; set; }
    ...
}
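For reference, a fuller mapping might look like the sketch below; the attribute arguments are assumptions, not your actual schema. IsDbGenerated tells LINQ to SQL to leave the column out of the INSERT so the DF_Artikel_CDTimeStamp default can fire, and AutoSync.OnInsert reads the database-generated value back into the entity after SubmitChanges():

// assumes: using System.Data.Linq.Mapping;
[Table(Name = "Artikel")]
public class Article
{
    // Hypothetical key column, shown only to make the sketch self-contained.
    [Column(IsPrimaryKey = true)]
    public int ArticleNr { get; set; }

    // Excluded from the INSERT so the getdate() default fills it in;
    // the generated value is synced back into the object after the insert.
    [Column(IsDbGenerated = true, AutoSync = AutoSync.OnInsert,
            UpdateCheck = UpdateCheck.Never)]
    public DateTime CDTimeStamp { get; set; }
}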
I need to implement a function that stores a value but limits updates to once per week.
I implemented it in the following way:
class Example
{
    // Stored in db
    public int _value;
    // Stored in db
    public DateTime _updatedAt;
    // Stored in db
    public DateTime _canUpdateAfter;
    // Constant in code
    public TimeSpan _updateTimeout = TimeSpan.FromMinutes(1);

    public void StoreValue1(int value)
    {
        if (DateTime.Now - _updatedAt < _updateTimeout)
        {
            return;
        }
        _value = value;
        _updatedAt = DateTime.Now;
    }

    public void StoreValue2(int value)
    {
        if (_canUpdateAfter > DateTime.Now)
        {
            return;
        }
        _value = value;
        _canUpdateAfter = DateTime.Now + _updateTimeout;
    }
}
I have two ways of implementing it:
1. Store the time of the last update in the db and calculate in .NET code whether the timeout has passed.
2. Store the time at which the timeout expires in the db and compare it against the current time in .NET code.
Which should I use, and why?
Both solutions are valid.
The only difference between them is when the decision about the next allowed update is made.
With solution 1 you make the decision on every evaluation; with solution 2 you fix the decision in the past, at write time.
I prefer solution 1; it is more flexible and sustainable.
Keep in mind the case where your business changes the update frequency. With solution 1, a new code deployment (or changing one row of a hypothetical configuration table) is enough, whereas with solution 2 you would need to update every row of the table.
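To make that concrete, here is a minimal sketch of what such a frequency change costs under each approach (names taken from the class above; the SQL is hypothetical):

// Solution 1: change the constant and redeploy; existing rows need no touch-up,
// because the expiry is recomputed from _updatedAt on every check.
public TimeSpan _updateTimeout = TimeSpan.FromDays(14); // was FromDays(7)

// Solution 2: the stored expiries were computed with the old timeout,
// so every row has to be migrated with a one-off update, e.g.:
//   UPDATE Example SET CanUpdateAfter = DATEADD(day, 7, CanUpdateAfter);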
I'm trying to insert a new record using Entity Framework with the following code. Within the Save() method of the BuildRequest class, when the insert occurs and db.Save() is called, a JobId is correctly generated for the BuildRequest (indicating that the insert is being executed), yet the record isn't added to the database.
When I check in SQL profiler, here's what the insert is trying to do:
exec sp_executesql N'INSERT [dbo].[BuildQueue]([ApplicationId], [BuildReason])
VALUES (@0, @1)
SELECT [JobId]
FROM [dbo].[BuildQueue]
WHERE @@ROWCOUNT > 0 AND [JobId] = scope_identity()',N'@0 int,@1 tinyint',@0=5819,@1=0
Here is my call to create a new 'job':
BuildRequest job = new BuildRequest(ApplicationId, Core.BuildReasons.NewApp);
job.Save();
which uses the following class:
public class BuildRequest
{
    private Data.BuildQueue _buildRequest;

    public BuildRequest(int applicationId, BuildReasons reason)
    {
        _buildRequest = new Data.BuildQueue();
        ApplicationId = applicationId;
        BuildReason = reason;
    }

    public int JobId
    {
        get { return _buildRequest.JobId; }
    }

    public int ApplicationId
    {
        get { return _buildRequest.ApplicationId; }
        set { _buildRequest.ApplicationId = value; }
    }

    public BuildReasons BuildReason
    {
        get { return (BuildReasons)_buildRequest.BuildReason; }
        set { _buildRequest.BuildReason = (byte)value; }
    }

    public int Save()
    {
        using (Core.Data.UnitOfWork db = new Data.UnitOfWork())
        {
            Data.BuildQueue buildRequest = db.BuildQueueRepository
                .Get(a => a.JobId == this.JobId)
                .SingleOrDefault();

            if (buildRequest == null) // current build request doesn't already exist
            {
                db.BuildQueueRepository.Insert(_buildRequest);
            }
            else // we're editing an existing build request
            {
                db.BuildQueueRepository.Update(_buildRequest);
            }

            db.Save();
            // **At this point, _buildRequest has a JobId, but no record is added to the database
            return JobId;
        }
    }
}
What I've checked so far:
- the 'UnitOfWork' object I create ('db') is connected to the correct database (a remote SQL Server database)
- db.BuildQueueRepository.Insert(...) is correctly calling dbSet.Add(entity);
- db.Save() is correctly calling context.SaveChanges();
- there's existing code elsewhere in my application that uses the same generic UnitOfWork repository, appears almost identical to the code above, and works perfectly
What could be happening? What could be the difference in the code that's working correctly?
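(For context, a minimal sketch of the kind of generic UnitOfWork/repository described above; all names and signatures here are assumptions inferred from the calls in Save(), not the actual implementation:)

// assumes: using System; using System.Linq; using System.Data.Entity;
public class UnitOfWork : IDisposable
{
    private readonly DbContext _context = new MyDbContext(); // hypothetical EF context

    public GenericRepository<Data.BuildQueue> BuildQueueRepository { get; private set; }

    public UnitOfWork()
    {
        BuildQueueRepository = new GenericRepository<Data.BuildQueue>(_context);
    }

    public void Save() { _context.SaveChanges(); }
    public void Dispose() { _context.Dispose(); }
}

public class GenericRepository<TEntity> where TEntity : class
{
    private readonly DbContext _context;
    public GenericRepository(DbContext context) { _context = context; }

    // Filter with a LINQ expression, e.g. Get(a => a.JobId == id)
    public IQueryable<TEntity> Get(System.Linq.Expressions.Expression<Func<TEntity, bool>> filter)
    {
        return _context.Set<TEntity>().Where(filter);
    }

    public void Insert(TEntity entity) { _context.Set<TEntity>().Add(entity); }

    public void Update(TEntity entity)
    {
        _context.Entry(entity).State = EntityState.Modified;
    }
}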
Doh! There's a job that takes new records out of the queue when they're in a certain status. It turns out that this particular routine sets that status immediately, so the record was being inserted and then immediately moved to a different table.
I want to add a button, called Insert Products, to my opportunity header record. It will send the opportunity ID to a Visualforce page, which will have a file-select button and an insert button that loops through the CSV and inserts the records on the related opportunity.
This is for non-technical users, so using Data Loader is not an option.
I got this working with a standard Apex class; however, I hit a limit when I load over 1,000 records (which would happen regularly).
I need to convert this to a batch process but am not sure how to do that.
Anyone able to point me in the right direction? I understand a batch should have a start, execute and finish, but I am not sure where I should split the CSV, and where to read and load it.
I found this link, which I could not work out how to translate to my requirements: http://developer.financialforce.com/customizations/importing-large-csv-files-via-batch-apex/
Here is the code I have for the standard Apex class, which works:
public class importOppLinesController {
    public List<OpportunityLineItem> oLiObj {get; set;}
    public String recOppId {
        get;
        // *** setter is NOT being called ***
        set {
            recOppId = value;
            System.debug('value: ' + value);
        }
    }
    public Blob csvFileBody {get; set;}
    public String csvAsString {get; set;}
    public String[] csvFileLines {get; set;}
    public List<OpportunityLineItem> oppLine {get; set;}

    public importOppLinesController() {
        csvFileLines = new String[]{};
        oppLine = new List<OpportunityLineItem>();
    }

    public void importCSVFile() {
        PricebookEntry pbeId;
        String unitPrice = '';
        try {
            csvAsString = csvFileBody.toString();
            csvFileLines = csvAsString.split('\n');

            for (Integer i = 1; i < csvFileLines.size(); i++) {
                OpportunityLineItem oLiObj = new OpportunityLineItem();
                String[] csvRecordData = csvFileLines[i].split(',');

                String pbeCode = csvRecordData[0];
                pbeId = [SELECT Id FROM PricebookEntry
                         WHERE ProductCode = :pbeCode
                         AND Pricebook2Id = 'xxxx HardCodedValue xxxx'][0];
                oLiObj.PricebookEntryId = pbeId.Id;

                oLiObj.Quantity = Decimal.valueOf(csvRecordData[1]);
                unitPrice = String.valueOf(csvRecordData[2]);
                oLiObj.UnitPrice = Decimal.valueOf(unitPrice);
                oLiObj.OpportunityId = recOppId;

                insert oLiObj;
            }
        }
        catch (Exception e) {
            ApexPages.Message errorMessage = new ApexPages.Message(
                ApexPages.Severity.ERROR, e + ' - ' + unitPrice);
            ApexPages.addMessage(errorMessage);
        }
    }
}
The first problem I can sense is that the insert DML statement is inside the for loop. Can you put each new oLiObj into a list declared before the for loop starts, and then insert that list after the for loop?
It should bring some more sanity to your code.
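A minimal sketch of that change, reusing the names from your class (only the loop and the insert differ):

List<OpportunityLineItem> toInsert = new List<OpportunityLineItem>();
for (Integer i = 1; i < csvFileLines.size(); i++) {
    OpportunityLineItem oLiObj = new OpportunityLineItem();
    // ... populate oLiObj from csvFileLines[i] exactly as before ...
    toInsert.add(oLiObj);
}
insert toInsert; // one DML statement for the whole file instead of one per row

The SOQL query against PricebookEntry has the same shape of problem (one query per row runs into the governor limit on queries), so you would likely also want to collect the product codes first and resolve them with a single query into a Map before the loop.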
Is there a way to specify that I want all of the DateTimes that OrmLite materializes to be set to UTC kind?
I store a lot of DateTimes in my database via stored procedures when a row is inserted:
insert [Comment] (
    Body
    , CreatedOn
) values (
    @Body
    , getutcdate()
);
When I retrieve the values via a select statement in ormlite, the datetimes come out in Unspecified kind (which is interpreted as the local timezone, I believe):
var comments = db.SqlList<Comment>("select * from [Comment] where ... ");
I would prefer not to set each DateTime object individually:
foreach (var comment in comments) {
    comment.CreatedOn = DateTime.SpecifyKind(comment.CreatedOn, DateTimeKind.Utc);
}
I found this question, but I don't think it's quite what I'm asking for:
servicestack ormlite sqlite DateTime getting TimeZone adjustment on insert
Also found this pull request, but setting SqlServerOrmLiteDialectProvider.EnsureUtc(true) doesn't seem to do it either.
SqlServerOrmLiteDialectProvider.EnsureUtc(true) does work; there was something else going on with my test case that led me to believe it didn't. Hopefully this will help someone else.
Here's some sample code:
model.cs
public class DateTimeTest {
    [AutoIncrement]
    public int Id { get; set; }

    public DateTime CreatedOn { get; set; }
}
test.cs
var connectionString = "server=dblcl;database=flak;trusted_connection=true;";
var provider = new SqlServerOrmLiteDialectProvider();
provider.EnsureUtc(true);

var factory = new OrmLiteConnectionFactory(connectionString, provider);
var connection = factory.Open();
connection.CreateTable(true, typeof(DateTimeTest));
connection.ExecuteSql("insert DateTimeTest (CreatedOn) values (getutcdate())");

var results = connection.SqlList<DateTimeTest>("select * from DateTimeTest");
foreach (var result in results) {
    Console.WriteLine("{0},{1},{2},{3},{4}", result.Id, result.CreatedOn,
        result.CreatedOn.Kind, result.CreatedOn.ToLocalTime(),
        result.CreatedOn.ToUniversalTime());
}
output
1,9/13/2013 5:19:12 PM,Utc,9/13/2013 10:19:12 AM,9/13/2013 5:19:12 PM
My application is using SQLServer and JPA2 in the backend. The app makes use of a timestamp column (in the SQLServer sense, which is equivalent to row version; see here) per entity to keep track of freshly modified entities. NB: SQLServer stores this column as binary(8).
Each entity has a respective timestamp property, mapped as @Lob, which is the way to go for binary columns:
@Lob
@Column(columnDefinition="timestamp", insertable=false, updatable=false)
public byte[] getTimestamp() {
    ...
The server sends incremental updates to mobile clients along with the latest database timestamp. The mobile client will then pass the old timestamp back to the server on the next refresh request so that the server knows to return only fresh data. Here's what a typical query (in JPQL) looks like:
select v from Visit v where v.timestamp > :oldTimestamp
Please note that I'm using a byte array as a query parameter, and it works fine when implemented this way in JPQL.
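For reference, the working JPQL version looks something like this (method and variable names are mine, assumed from the query above):

List<Visit> getFreshVisits(EntityManager em, byte[] oldTimestamp) {
    return em.createQuery(
            "select v from Visit v where v.timestamp > :oldTimestamp", Visit.class)
        .setParameter("oldTimestamp", oldTimestamp)
        .getResultList();
}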
My problems begin when trying to do the same using the Criteria API:
private void getFreshVisits(byte[] oldVersion) {
    EntityManager em = getEntityManager();
    CriteriaBuilder cb = em.getCriteriaBuilder();
    CriteriaQuery<Visit> cq = cb.createQuery(Visit.class);
    Root<Visit> root = cq.from(Visit.class);
    Predicate tsPred = cb.gt(root.get("timestamp").as(byte[].class), oldVersion); // compiler error
    cq.where(tsPred);
    ...
}
The above results in a compiler error, because gt requires its arguments to be of a Number type. One could instead use the greaterThan method, which merely requires the parameters to be Comparable, but since byte[] does not implement Comparable that results in yet another compiler error.
So to sum it up, my question is: how can I use the criteria api to add a greaterThan predicate for a byte[] property? Any help will be greatly appreciated.
PS. As to why I'm not using a regular DateTime last_modified column: because of concurrency and the way synchronization is implemented, that approach could result in lost updates. Microsoft's Sync Framework documentation recommends the row-version approach as well.
I know this was asked a couple of years back, but just in case anyone else stumbles upon this: in order to use a SQLServer rowver column within JPA you need to do a couple of things.
First, create a type that will wrap the rowver/timestamp:
import com.fasterxml.jackson.annotation.JsonIgnore;
import javax.xml.bind.annotation.XmlTransient;
import java.io.Serializable;
import java.math.BigInteger;
import java.util.Arrays;

/**
 * A RowVersion object
 */
public class RowVersion implements Serializable, Comparable<RowVersion> {

    @XmlTransient
    @JsonIgnore
    private byte[] rowver;

    public RowVersion() {
    }

    public RowVersion(byte[] internal) {
        this.rowver = internal;
    }

    @XmlTransient
    @JsonIgnore
    public byte[] getRowver() {
        return rowver;
    }

    public void setRowver(byte[] rowver) {
        this.rowver = rowver;
    }

    @Override
    public int compareTo(RowVersion o) {
        return new BigInteger(1, rowver).compareTo(new BigInteger(1, o.getRowver()));
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        RowVersion that = (RowVersion) o;
        return Arrays.equals(rowver, that.rowver);
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(rowver);
    }
}
The key here is that it implements Comparable, which you need if you want to use it in comparisons (and you definitely do).
Next, create an AttributeConverter that will move between byte[] and the class you just made:
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

/**
 * JPA converter for the RowVersion type
 */
@Converter
public class RowVersionTypeConverter implements AttributeConverter<RowVersion, byte[]> {

    @Override
    public byte[] convertToDatabaseColumn(RowVersion attribute) {
        return attribute != null ? attribute.getRowver() : null;
    }

    @Override
    public RowVersion convertToEntityAttribute(byte[] dbData) {
        return new RowVersion(dbData);
    }
}
Now let's apply this RowVersion attribute/type to a real-world scenario. Let's say you wanted to find all Programs that have changed on or before some point in time.
One straightforward way to solve this would be to use a DateTime field in the object and a timestamp column in the db. Then you would use 'where lastUpdatedDate <= :date'.
Suppose that you don't have that timestamp column, or there's no guarantee it will be updated properly when changes are made; or let's say your shop loves SQLServer and wants to use rowver instead.
What to do? There are two issues to solve: one, how to generate a rowver, and two, how to use the generated rowver to find Programs.
Since the database generates the rowver, you can either ask the db for the 'current max rowver' (a custom SQLServer thing) or you can simply save an object that has a RowVersion attribute and then use that object's generated RowVersion as the boundary of the query that finds the Programs changed after that point. The latter solution is more portable and is what the solution below uses.
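(For completeness: in SQL Server that 'current max rowver' can be read with SELECT @@DBTS, which returns the last-used rowversion value in the current database; it is mentioned here only as background and is not used in the solution below.)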
The SyncPoint class snippet below is the object used as that kind of 'point in time' marker. Once a SyncPoint is saved, the RowVersion attached to it is the db version at the time it was saved.
Here is the SyncPoint snippet. Notice the annotation specifying the custom converter (and don't forget to make the column insertable = false, updatable = false):
/**
 * A sample super class that uses RowVersion
 */
@MappedSuperclass
public abstract class SyncPoint {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // type is rowver for SQLServer, blob(8) for postgresql and h2
    @Column(name = "current_database_version", insertable = false, updatable = false)
    @Convert(converter = RowVersionTypeConverter.class)
    private RowVersion currentDatabaseVersion;

    @Column(name = "created_date_utc", columnDefinition = "timestamp", nullable = false)
    private DateTime createdDate;
    ...
Also (for this example) here is the Program object we want to find:
@Entity
@Table(name = "program_table")
public class Program {

    @Id
    private Integer id;

    private boolean active;

    // type is rowver for SQLServer, blob(8) for postgresql and h2
    @Column(name = "rowver", insertable = false, updatable = false)
    @Convert(converter = RowVersionTypeConverter.class)
    private RowVersion currentDatabaseVersion;

    @Column(name = "last_chng_dt")
    private DateTime lastUpdatedDate;
    ...
Now you can use these fields within your JPA criteria queries just like anything else. Here is a snippet we used inside a spring-data Specifications class:
/**
 * Find Programs changed after a synchronization point
 *
 * @param filter that has the changedAfter sync point
 * @return a specification or null
 */
public Specification<Program> changedBeforeOrEqualTo(final ProgramSearchFilter filter) {
    return new Specification<Program>() {
        @Override
        public Predicate toPredicate(Root<Program> root, CriteriaQuery<?> query, CriteriaBuilder cb) {
            if (filter != null && filter.changedAfter() != null) {
                // load the SyncPoint from the db to get the rowver column populated
                SyncPoint fromDb = synchronizationPersistence.reload(filter.changedBeforeOrEqualTo());
                if (fromDb != null) {
                    // real sync point made by database
                    if (fromDb.getCurrentDatabaseVersion() != null) {
                        // use binary version
                        return cb.lessThanOrEqualTo(root.get(Program_.currentDatabaseVersion),
                                fromDb.getCurrentDatabaseVersion());
                    } else if (fromDb.getCreatedDate() != null) {
                        // use timestamp instead of binary version cause db doesn't make one
                        return cb.lessThanOrEqualTo(root.get(Program_.lastUpdatedDate),
                                fromDb.getCreatedDate());
                    }
                }
            }
            return null;
        }
    };
}
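Usage is then the standard spring-data pattern; assuming the repository extends JpaSpecificationExecutor<Program>, it is something like:

List<Program> changed = programRepository.findAll(
        specifications.changedBeforeOrEqualTo(filter));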
The specification above works with either the binary current database version or a timestamp; this way I could test my stuff and all the upstream code on a database other than SQLServer.
That's really it: a) a type to wrap the byte[], b) a JPA converter, c) use the attribute in queries.