Oracle ADF: difference between two dates

I'm working on a college project and I have a problem with this code:
Timestamp ts1 = jbod1.timestampValue();
Timestamp ts2 = jbod2.timestampValue();
I need to calculate the duration between two date columns, so that whenever I add an employee the duration is calculated automatically. I need help fast, please.
It shows this error:
incompatible types: java.sql.Timestamp cannot be converted to oracle.jbo.domain.Timestamp
public Number getDuration() {
    oracle.jbo.domain.Date jbod1 = getVacstartdate();
    oracle.jbo.domain.Date jbod2 = getVacenddate();
    oracle.jbo.domain.Number DURATION;
    Timestamp ts1 = jbod1.timestampValue();
    Timestamp ts2 = jbod2.timestampValue();
    long ndays = ((ts2.getTime() - ts1.getTime()) / 86400000) + 1;
    DURATION = new oracle.jbo.domain.Number(ndays);
    System.out.println("Number of Days " + DURATION);
    return DURATION;
}

Google is your friend here, especially if you are in a hurry!
Read the docs for oracle.jbo.domain.Timestamp and you'll see its constructors; use those to create instances of the jbo Timestamp. You can't convert one object type to another by assignment. This post may help.
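For illustration, a minimal sketch of the corrected method, assuming (as in the question) that the attribute getters return oracle.jbo.domain.Date. The key change is declaring the variables as java.sql.Timestamp, which is what timestampValue() actually returns, rather than oracle.jbo.domain.Timestamp:
public oracle.jbo.domain.Number getDuration() {
    oracle.jbo.domain.Date jbod1 = getVacstartdate();
    oracle.jbo.domain.Date jbod2 = getVacenddate();
    // timestampValue() returns java.sql.Timestamp, so declare the
    // variables with that type; the error comes from having imported
    // oracle.jbo.domain.Timestamp instead
    java.sql.Timestamp ts1 = jbod1.timestampValue();
    java.sql.Timestamp ts2 = jbod2.timestampValue();
    // milliseconds -> whole days, counting both endpoints
    long ndays = ((ts2.getTime() - ts1.getTime()) / 86400000L) + 1;
    return new oracle.jbo.domain.Number(ndays);
}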

Related

Composite or FilterPredicate query on Ref'd entity

Here's what I have:
class A {
    Ref<foo> b;
    Ref<foo> c;
    int code;
    Date timestamp;
}
The pseudo "where" clause of the SQL statement would look like this:
where b = object or (c = object and code = 1) order by timestamp
In plain English: give me all the records of A where b equals the specified object, or where c equals the specified object and code equals 1. Order the results by timestamp.
Is the composite query part even possible with the datastore (Objectify)? I really don't want to do two queries and merge the results, because I have to sort by timestamp.
Any help is appreciated.
P.S. I already tried
new FilterPredicate(b, EQUAL, object)
This didn't work because the entity type is not a supported type.
Thanks!
Pass a native datastore Key object to the FilterPredicate: the Google SDK Key, not the generic Objectify Key<?>.
Normally, when filtering on properties, Objectify translates Ref<?> and Key<?> objects to native datastore keys for you. With the Google-supplied FilterPredicate, that isn't an option, so you have to do the translation manually.
Objectify stores all Key<?> and Ref<?> fields as native datastore Keys, so you can freely interchange them (or even change the type of the fields if you want).
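A hedged sketch of how the pieces might fit together (Query, FilterPredicate, and CompositeFilterOperator are the low-level App Engine classes; the property names and the ref variable come from the question, and the exact Ref/Key accessor names may vary by Objectify version):
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.datastore.Query.CompositeFilterOperator;
import com.google.appengine.api.datastore.Query.FilterOperator;
import com.google.appengine.api.datastore.Query.FilterPredicate;
import com.google.appengine.api.datastore.Query.SortDirection;

// Translate the Objectify Ref<?> to a native datastore Key by hand
Key raw = ref.getKey().getRaw();

Query q = new Query("A")
    .setFilter(CompositeFilterOperator.or(
        new FilterPredicate("b", FilterOperator.EQUAL, raw),
        CompositeFilterOperator.and(
            new FilterPredicate("c", FilterOperator.EQUAL, raw),
            new FilterPredicate("code", FilterOperator.EQUAL, 1))))
    .addSort("timestamp", SortDirection.ASCENDING);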

Incrementing keys for a multi-user tool in google cloud datastore

I am building a tool using the Google Cloud Datastore Java API. The backend of this tool has a bunch of methods and APIs that we made, hosted on Google App Engine. The data we collect comes from a Chrome extension we built, and using the above-mentioned APIs we store it in GCD. Everything works perfectly well in our implementation except for one thing: the identifiers.
I created a method to store all our relevant information in several tables; while submitting, I create each Entity with an identifier that is the next number in ascending order after the previous entry in the table. The tool is used by several people, and the entries for a particular day are stored in the correct order. However, every day the ID variable seems to be reset, and our table starts overwriting information as the ID starts from 1 again. It remains constant during the day, but as soon as the date changes, the ID starts from 1 again.
AtomicInteger Identifier = new AtomicInteger();

public void DataEntity(String EmpName, String Date, String Col1, String Col2)
{
    int id = Identifier.incrementAndGet();
    Entity en = new Entity("DataTable", id);
    en.setProperty("Employee Name", EmpName);
    en.setProperty("Submit_Date", Date);
    en.setProperty("Column1", Col1);
    en.setProperty("Column2", Col2);
    ...
    ds.put(en);
}
My guess is that at the end of the day everything is garbage collected. I should also note that our app is thread-safe, so data is not being overwritten concurrently; only the next day, when all the variables seem to have been reset, does everything start from 1 again. Any help will be much appreciated. Please let me know in case you have any questions; I'll be happy to provide more info.
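For context: an in-memory AtomicInteger only lives as long as the server instance holding it, and App Engine recycles instances (typically during idle periods), which would reset the counter exactly as described. A minimal sketch of one common alternative, assuming the low-level DatastoreService API from the question, is to omit the numeric ID and let the datastore allocate a unique one:
// Omitting the ID argument makes the datastore assign a unique ID on put(),
// so no counter state lives in instance memory between requests
Entity en = new Entity("DataTable");
en.setProperty("Employee Name", EmpName);
// ... remaining properties as before ...
ds.put(en);  // the returned Key carries the allocated ID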

TypeError: can't compare datetime.date to DateProperty

I am trying to query if a certain date belongs to a specific range of dates. Source code example:
billing_period_found = BillingPeriod.query(
    ndb.AND(
        transaction.date > BillingPeriod.start_date,
        transaction.date < BillingPeriod.end_date)
).get()
Data definition:
class Transaction(ndb.Model):
    date = ndb.DateProperty(required=False)

class BillingPeriod(ndb.Model):
    start_date = ndb.DateProperty(required=False)
    end_date = ndb.DateProperty(required=False)
Getting the following error:
TypeError: can't compare datetime.date to DateProperty
The error message does make sense, because datetime is different from DateProperty. However, as you can see, transaction.date is not defined as a datetime, so I don't see where this attempt to convert datetime to date is coming from. Anyway, if I could figure out how to convert datetime to DateProperty, I guess it would fix the problem.
Any ideas on how to solve this?
Thanks!
The App Engine datastore does not allow queries with inequalities on multiple properties (this is not a limitation of ndb, but of the underlying datastore). Selecting date-range entities that contain a certain date is a typical example of a task that this makes impossible to achieve in a single query.
Check out Optimizing a inequality query in ndb over two properties for an example of this question and, in the answer, one suggestion that might work: query for (in your case) all BillingPeriod entities with end_date greater than the desired date, perhaps with a projection to get just their key and start_date; then, in your own application, select out of those only the ones with start_date less than the desired date (if you only want one of them, a next over the iterator will stop as soon as it finds one).
Edit: the issue above is problem #1 with this code; once solved, problem #2 arises: as documented at https://cloud.google.com/appengine/docs/python/ndb/queries, the property in an ndb query must always be on the left of the comparison operator. So you can't write date < BillingPeriod.end_date, as that would put the property on the right; rather, you write BillingPeriod.end_date > date.
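A minimal sketch of that two-step approach, using the model names from the question (the date being tested is assumed to be a plain datetime.date value):
# Single inequality in the datastore query, property on the left
candidates = BillingPeriod.query(BillingPeriod.end_date > transaction.date)

# Apply the second inequality in application code; next() stops at the
# first entity that also satisfies the start_date condition
billing_period_found = next(
    (bp for bp in candidates if bp.start_date < transaction.date),
    None)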

How do I map TimeSpan with greater than 24 hours to SQL server Code First?

I am trying to map a TimeSpan Code First property to SQL Server. Code First seems to be creating it as a Time(7) column in SQL. However, TimeSpan in .NET can handle longer periods than 24 hours, and I need to store more than 24 hours for event lengths. What is the best way to handle this with Code First?
As per my previous question on how to store a TimeSpan in SQL, I was advised to store it as seconds or ticks, etc. In the end I didn't map the TimeSpan column, as there is no equivalent in SQL Server. I simply created a second field that converted the TimeSpan to ticks and stored that in the DB, and prevented the TimeSpan itself from being mapped:
public Int64 ValidityPeriodTicks { get; set; }

[NotMapped]
public TimeSpan ValidityPeriod
{
    get { return TimeSpan.FromTicks(ValidityPeriodTicks); }
    set { ValidityPeriodTicks = value.Ticks; }
}
If you wish to do this in EF Core, it is a lot cleaner because you can use value conversions. In 2.1 you can use a value converter such as TimeSpanToTicksConverter to map TimeSpans to ticks in the database transparently. So EF Core is certainly worth considering (assuming its other features meet your needs); you can use it in .NET Framework 4.7 projects, so you don't need to switch to .NET Core.
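A sketch of what that EF Core mapping might look like (TimeSpanToTicksConverter lives in Microsoft.EntityFrameworkCore.Storage.ValueConversion; the entity and property names here are placeholders):
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Storage.ValueConversion;

public class MyEvent
{
    public int Id { get; set; }
    public TimeSpan Duration { get; set; }   // can exceed 24 hours
}

public class MyContext : DbContext
{
    public DbSet<MyEvent> Events { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Persist the TimeSpan as a bigint of ticks so values over
        // 24 hours round-trip, instead of mapping to SQL time(7)
        modelBuilder.Entity<MyEvent>()
            .Property(e => e.Duration)
            .HasConversion(new TimeSpanToTicksConverter());
    }
}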
As far as I know, there is no equivalent data type in SQL Server for .NET's TimeSpan. The closest match is Time, but, as you pointed out, it only supports values up to 24 hours: http://msdn.microsoft.com/en-us/library/ms186724.aspx#DateandTimeDataTypes.
The following MSDN document describes the issue: http://msdn.microsoft.com/en-us/library/bb386909.aspx. I'm assuming that since there is no solution listed there, it's not currently possible.
First of all, MVC has nothing to do with this issue; it is entirely related to EF Code First and SQL Server, so it's a DAL matter.
One solution could be to provide a custom column type in your entity configuration, like this:
modelBuilder
    .Entity<MyClass>()
    .Property(c => c.MyTimeSpan)
    .HasColumnType("whatever sql type you want to use");

key-value store for time series data?

I've been using SQL Server to store historical time series data for a couple hundred thousand objects, observed about 100 times per day. I'm finding that queries (give me all values for object XYZ between time t1 and time t2) are too slow (for my needs, slow is more than a second). I'm indexing by timestamp and object ID.
I've entertained the thought of using a key-value store like MongoDB instead, but I'm not sure whether this is an "appropriate" use of this sort of thing, and I couldn't find any mention of using such a database for time series data. Ideally, I'd be able to do the following queries:
retrieve all the data for object XYZ between time t1 and time t2
do the above, but return one data point per day (first, last, closest to time t...)
retrieve all data for all objects for a particular timestamp
The data should be ordered, and ideally it should be fast to write new data as well as update existing data.
It seems like my desire to query by object ID as well as by timestamp might necessitate having two copies of the database, indexed in different ways, to get optimal performance... Has anyone had experience building a system like this with a key-value store, HDF5, or something else? Or is this totally doable in SQL Server and I'm just not doing it right?
It sounds like MongoDB would be a very good fit. Updates and inserts are super fast, so you might want to create a document for every event, such as:
{
    object: XYZ,
    ts: new Date()
}
Then you can index the ts field and queries will also be fast. (By the way, you can create multiple indexes on a single collection.)
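For example, a compound index on object and ts would cover the per-object range queries below (createIndex is the standard shell helper; older shells used ensureIndex):
db.data.createIndex({object : 1, ts : 1})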
How to do your three queries:
retrieve all the data for object XYZ between time t1 and time t2
db.data.find({object : XYZ, ts : {$gt : t1, $lt : t2}})
do the above, but return one data point per day (first, last, closest to time t...)
// first
db.data.find({object : XYZ, ts : {$gt : new Date(/* start of day */)}}).sort({ts : 1}).limit(1)
// last
db.data.find({object : XYZ, ts : {$lt : new Date(/* end of day */)}}).sort({ts : -1}).limit(1)
For closest to some time, you'd probably need a custom JavaScript function, but it's doable.
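A rough sketch of that closest-to-a-time lookup in the shell, assuming t is the target time: take the nearest event on each side of t and compare distances in application code.
// nearest event at or before t, and nearest at or after t
var before = db.data.find({object : XYZ, ts : {$lte : t}}).sort({ts : -1}).limit(1).toArray()[0];
var after  = db.data.find({object : XYZ, ts : {$gte : t}}).sort({ts : 1}).limit(1).toArray()[0];
// whichever of the two has the smaller |ts - t| is the closest event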
retrieve all data for all objects for a particular timestamp
db.data.find({ts : timestamp})
Feel free to ask on the user list if you have any questions; someone else might be able to think of an easier way of getting closest-to-a-time events.
This is why databases specific to time series data exist: relational databases simply aren't fast enough for large time series.
I've used Fame quite a lot at investment banks. It's very fast, but I imagine very expensive. However, if your application requires the speed, it might be worth looking at.
There is an open-source time series database under active development (.NET only for now) that I wrote. It can store massive amounts (terabytes) of uniform data in a "binary flat file" fashion. All usage is stream-oriented (forward or reverse). We actively use it for stock tick storage and analysis at our company.
I am not sure this will be exactly what you need, but it will cover the first two points: get values from t1 to t2 for any series (one series per file), or just take one data point.
https://code.google.com/p/timeseriesdb/
// Create a new file for MyStruct data.
// Use BinCompressedFile<,> for compressed storage of deltas.
using (var file = new BinSeriesFile<UtcDateTime, MyStruct>("data.bts"))
{
    file.UniqueIndexes = true;   // enforces index uniqueness
    file.InitializeNewFile();    // create file and write header
    file.AppendData(data);       // append data (a stream of ArraySegment<>)
}

// Read the needed data.
using (var file = (IEnumerableFeed<UtcDateTime, MyStruct>) BinaryFile.Open("data.bts", false))
{
    // Enumerate one item at a time, up to a maximum of 10 items,
    // starting at 2011-1-1 (can also get one segment at a time
    // with StreamSegments).
    foreach (var val in file.Stream(new UtcDateTime(2011, 1, 1), maxItemCount: 10))
        Console.WriteLine(val);
}
I recently tried something similar in F#. I started with the 1-minute bar format for the symbol in question, in a space-delimited file with roughly 80,000 1-minute bar readings. The code to load and parse from disk took under 1 ms. The code to calculate a 100-minute SMA for every period in the file took 530 ms. Once the SMA sequence is calculated, I can pull any slice I want from it in under 1 ms. I am just learning F#, so there are probably ways to optimize. Note this was after multiple test runs, so the data was already in the Windows cache, but even when loaded from disk it never adds more than 15 ms to the load.
date,time,open,high,low,close,volume
01/03/2011,08:00:00,94.38,94.38,93.66,93.66,3800
To reduce recalculation time, I save the entire calculated indicator sequence to disk in a single \n-delimited file, and it generally takes less than 0.5 ms to load and parse when it is in the Windows file cache. A simple iteration across the full time series to return the set of records inside a date range is a sub-3 ms operation with a full year of 1-minute bars. I also keep the daily bars in a separate file, which loads even faster because of the lower data volume.
I use the .NET 4 System.Runtime.Caching layer to cache the serialized representation of the pre-calculated series, and with a couple of gigs of RAM dedicated to the cache I get nearly a 100% cache hit rate, so access to any pre-computed indicator set for any symbol generally runs under 1 ms.
Pulling any slice of data I want from the indicator is typically less than 1 ms, so advanced queries simply don't make sense. Using this strategy, I could easily load 10 years of 1-minute bars in less than 20 ms.
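A small sketch of the caching layer mentioned above (MemoryCache and CacheItemPolicy are the System.Runtime.Caching types; the key naming and series data are placeholders):
open System
open System.Runtime.Caching

let cache = MemoryCache.Default

// Placeholder for a pre-computed indicator series
let smaSeries = [| 94.38; 94.02; 93.66 |]

// Keep the series hot in RAM; lookups are then sub-millisecond
let policy = CacheItemPolicy(SlidingExpiration = TimeSpan.FromHours 4.0)
cache.Set("SMA100:MSFT", box smaSeries, policy)

// Later requests read straight from the cache
let cached = cache.Get("SMA100:MSFT") :?> float[]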
// Parse a \n-delimited file into RAM, then
// split each line on spaces into an
// array of tokens. Return the entire array
// as string[][].
let readSpaceDelimFile fname =
    System.IO.File.ReadAllLines(fname)
    |> Array.map (fun line -> line.Split [|' '|])

// From the two-dimensional array,
// pull out the single column holding the
// bar close, convert the value in every
// row to a float, and return the array
// of floats.
let GetArrClose (tarr : string[][]) =
    [| for aLine in tarr do
           //printfn "aLine=%A" aLine
           let closep = float (aLine.[5])
           yield closep |]
I use HDF5 as my time series repository. It offers a number of effective and fast compression styles that can be mixed and matched, and it can be used with a number of different programming languages.
I use boost::date_time for the timestamp field.
In the financial realm, I then create specific data structures for each of bars, ticks, trades, quotes, ...
I created a number of custom iterators and used Standard Template Library features to efficiently search for specific values or ranges of time-based records.
