How to properly connect an external Realm database file in an Android Kotlin project

I want to connect an external Realm database to my Android project. Realm is already set up in build.gradle. I copied the test database file "realmdata.realm" into the "raw" folder in "res".
Running the project gives me the error:
Caused by: io.realm.exceptions.RealmFileException: Could not resolve the path to the asset file: realmdata.realm Kind: ACCESS_ERROR.
...
d.androidrealmtestapp.MainActivity.onCreate(MainActivity.kt:40)
...
which corresponds to this line of code:
realm = Realm.getInstance(c)
No matter whether I change the filename or its position in the "res" directory, the output is the same. After printing the RealmConfiguration, the output contains: "realmFileName : default.realm". Why "default.realm", since I gave the asset file name "realmdata.realm"? What am I doing wrong? So my question is: how do I properly connect an external Realm file to the project? I am a beginner in Kotlin and Realm.
import android.support.v7.app.AppCompatActivity
import android.os.Bundle
import android.support.v7.widget.LinearLayoutManager
import android.support.v7.widget.RecyclerView
import io.realm.Realm
import io.realm.RealmConfiguration
import io.realm.RealmModel
import io.realm.annotations.RealmModule

class MainActivity : AppCompatActivity() {

    private lateinit var mainRecycler: RecyclerView
    lateinit var text: String
    private lateinit var realm: Realm

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        println("--------------------------------------------- ")
        print(application.assets.toString())
        Realm.init(this)
        val c = RealmConfiguration.Builder()
                .assetFile("realmdata.realm")
                .modules(MyModule())
                .readOnly()
                .build()
        println("--------------------------------------------- ")
        println(" c configuration builder file:")
        println(c)
        println("--------------------------------------------- ")
        Realm.setDefaultConfiguration(c)
        realm = Realm.getInstance(c)
        realm.beginTransaction()
        print("realm ...")
        realm.commitTransaction()
        mainRecycler = findViewById(R.id.main_recycler)
        mainRecycler.layoutManager = LinearLayoutManager(this)
        mainRecycler.adapter = MainAdapter()
    }

    @RealmModule(classes = arrayOf(RealmModel::class)) // list your model classes here
    private class MyModule
}

"I copied test database file: 'realmdata.realm' into 'raw' folder in 'res'"

You need to copy your database into the assets folder instead; files under res/raw are not resolved by assetFile(). To create an assets folder, follow this. A sketch of the corrected configuration is below.
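A minimal sketch of the corrected setup, assuming realmdata.realm is moved to src/main/assets/. Note that assetFile() only names the bundled file to copy from; the on-device file name comes from name(), which defaults to "default.realm" if omitted, and that is why the printed configuration showed that name:

Realm.init(this)
val config = RealmConfiguration.Builder()
        .name("realmdata.realm")      // on-device file name; "default.realm" if omitted
        .assetFile("realmdata.realm") // file copied from src/main/assets on first open
        .modules(MyModule())
        .readOnly()
        .build()
realm = Realm.getInstance(config)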

Related

Codename One "out of memory" when using Objective-C native interface (HEIC to JPEG conversion)

Since I'm implementing a custom gallery for Android and iOS, I have to access the gallery files stored in the FileSystemStorage directly through native interfaces.
The basic idea is to retrieve the file list through a native interface and then build a cross-platform GUI in Codename One. This works on Android: I had to make the thumbnail generation (on the Codename One side, not the native interface side) as fast as possible, and the overall result is quite acceptable.
On iOS, I have an additional issue: the HEIC image file format, which needs to be converted to JPEG to become usable in Codename One. Basically, I get the file list through the code in this question (I'm waiting for an answer...), then I have to convert each HEIC file to a temporary JPEG file, but my HEICtoJPEG native interface makes the app crash after a few images with an "out of memory" Xcode message...
I suspect the problematic code is the following; maybe the UIImage* image and/or the NSData* mediaData are never released:
#import "myapp_utilities_HEICtoJPEGNativeImpl.h"

@implementation myapp_utilities_HEICtoJPEGNativeImpl

-(NSData*)heicToJpeg:(NSData*)param{
    UIImage* image = [UIImage imageWithData:param];
    NSData* mediaData = UIImageJPEGRepresentation(image, 0.9);
    return mediaData;
}

-(BOOL)isSupported{
    return YES;
}

@end
This is the Java native interface:
import com.codename1.system.NativeInterface;
/**
 * @deprecated
 */
public interface HEICtoJPEGNative extends NativeInterface {
    public byte[] heicToJpeg(byte[] heicInput);
}
and this is the Java public API:
import com.codename1.io.FileSystemStorage;
import com.codename1.io.Util;
import com.codename1.system.NativeLookup;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
public class HEICtoJPEG {

    private static HEICtoJPEGNative nativeInterface = NativeLookup.create(HEICtoJPEGNative.class);

    /**
     * Public API to convert an HEIC file to a new JPEG file (placed in /heic)
     * @param heicFile in the FileSystemStorage
     * @return a new file (with unique name)
     */
    public static String convertToJPEG(String heicFile) throws IOException {
        if (nativeInterface != null && nativeInterface.isSupported()) {
            // It ensures that the directory exists.
            FileSystemStorage fss = FileSystemStorage.getInstance();
            String heicDir = fss.getAppHomePath() + "/heic";
            if (!fss.isDirectory(heicDir)) {
                fss.mkdir(heicDir);
            }
            ByteArrayOutputStream outHeic = new ByteArrayOutputStream();
            InputStream inHeic = fss.openInputStream(heicFile);
            Util.copy(inHeic, outHeic);
            byte[] heicData = outHeic.toByteArray();
            byte[] jpegData = nativeInterface.heicToJpeg(heicData);
            String jpegFile = heicDir + "/" + DeviceUtilities.getUniqueId() + ".jpg";
            OutputStream outJpeg = fss.openOutputStream(jpegFile);
            ByteArrayInputStream inJpeg = new ByteArrayInputStream(jpegData);
            Util.copy(inJpeg, outJpeg);
            return jpegFile;
        } else {
            return null;
        }
    }
}
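For context, a hypothetical call site (heicFilePath is a placeholder for one of the paths returned by the native file-list code):

String jpegPath = null;
try {
    // Returns null when the native interface is unavailable (e.g. in the simulator).
    jpegPath = HEICtoJPEG.convertToJPEG(heicFilePath);
} catch (IOException e) {
    Log.e(e);
}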
Since the Android counterpart works, I hope that the rest of my custom gallery code is fine and that this out-of-memory issue lies in the code I posted here.
I hope you can point me to a working solution. Thank you.
There was a memory leak in the way that the iOS port invoked native interface methods which received or returned primitive arrays (byte[], int[], etc.).
I have just committed a fix for this (native interface invocations are now wrapped in an autorelease pool) which will be available on the build server next Friday (October 9, 2020).
EDIT: (Friday October 2, 2020)
This fix has already been deployed to the build server, so you should be able to build again immediately and see if it fixes your issue.
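For reference, a minimal sketch of the kind of wrapping the fix applies, shown here inside the native method itself (an illustration, not the actual port code; under ARC the strong local reference keeps the returned data alive past the pool drain):

-(NSData*)heicToJpeg:(NSData*)param{
    NSData* mediaData = nil;
    @autoreleasepool {
        // The UIImage and any temporary buffers are drained with the pool.
        UIImage* image = [UIImage imageWithData:param];
        mediaData = UIImageJPEGRepresentation(image, 0.9);
    }
    return mediaData;
}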

Why does BigQuery fail to parse an Avro file that is accepted by avro-tools?

I'm trying to export Google Cloud Datastore data to Avro files in Google Cloud Storage and then load those files into BigQuery.
Firstly, I know that BigQuery can load Datastore backups, but this approach has several disadvantages that I'd like to avoid:
Backup tool is closed source
Backup tool format is undocumented.
Backup tool format cannot be read directly by Dataflow
Backup scheduling for appengine is in (apparently perpetual) alpha.
It is possible to implement your own backup handler in appengine, but it is fire and forget. You won't know when exactly the backup has finished or what the file name will be.
With the motivation for this experiment clarified, here is my Dataflow pipeline to export the data to Avro format:
package com.example.dataflow;
import com.google.api.services.datastore.DatastoreV1;
import com.google.api.services.datastore.DatastoreV1.Entity;
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.coders.AvroCoder;
import com.google.cloud.dataflow.sdk.io.AvroIO;
import com.google.cloud.dataflow.sdk.io.DatastoreIO;
import com.google.cloud.dataflow.sdk.io.Read;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import org.apache.avro.Schema;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.file.SeekableByteArrayInput;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DatumReader;
import org.apache.avro.protobuf.ProtobufData;
import org.apache.avro.protobuf.ProtobufDatumWriter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.ByteArrayOutputStream;
public class GCDSEntitiesToAvroSSCCEPipeline {

    private static final String GCS_TARGET_URI = "gs://myBucket/datastore/dummy";
    private static final String ENTITY_KIND = "Dummy";

    private static Schema getSchema() {
        return ProtobufData.get().getSchema(Entity.class);
    }

    private static final Logger LOG = LoggerFactory.getLogger(GCDSEntitiesToAvroSSCCEPipeline.class);

    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().create();
        Pipeline p = Pipeline.create(options);

        DatastoreV1.Query.Builder q = DatastoreV1.Query.newBuilder()
                .addKind(DatastoreV1.KindExpression.newBuilder().setName(ENTITY_KIND));

        p.apply(Read.named("DatastoreQuery").from(DatastoreIO.source()
                .withDataset(options.as(DataflowPipelineOptions.class).getProject())
                .withQuery(q.build())))
                .apply(ParDo.named("ProtoBufToAvro").of(new ProtoBufToAvro()))
                .setCoder(AvroCoder.of(getSchema()))
                .apply(AvroIO.Write.named("WriteToAvro")
                        .to(GCS_TARGET_URI)
                        .withSchema(getSchema())
                        .withSuffix(".avro"));

        p.run();
    }

    private static class ProtoBufToAvro extends DoFn<Entity, GenericRecord> {
        private static final long serialVersionUID = 1L;

        @Override
        public void processElement(ProcessContext c) throws Exception {
            Schema schema = getSchema();
            ProtobufDatumWriter<Entity> pbWriter = new ProtobufDatumWriter<>(Entity.class);
            DataFileWriter<Entity> dataFileWriter = new DataFileWriter<>(pbWriter);
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            dataFileWriter.create(schema, bos);
            dataFileWriter.append(c.element());
            dataFileWriter.close();

            DatumReader<GenericRecord> datumReader = new GenericDatumReader<>(schema);
            DataFileReader<GenericRecord> dataFileReader = new DataFileReader<>(
                    new SeekableByteArrayInput(bos.toByteArray()), datumReader);
            c.output(dataFileReader.next());
        }
    }
}
The pipeline runs fine; however, when I try to load the resultant Avro file into BigQuery, I get the following error:
bq load --project_id=roodev001 --source_format=AVRO dummy.dummy_1 gs://roodev001.appspot.com/datastore/dummy-00000-of-00001.avro
Waiting on bqjob_r5c9b81a49572a53b_00000154951eb523_1 ... (0s) Current status: DONE
BigQuery error in load operation: Error processing job 'roodev001:bqjob_r5c9b81a49572a53b_00000154951eb523_1': The Apache Avro library failed to parse file
gs://roodev001.appspot.com/datastore/dummy-00000-of-00001.avro.
However, if I load the resultant Avro file with avro-tools, everything is just fine:
avro-tools tojson datastore-dummy-00000-of-00001.avro | head
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
{"key":{"com.google.api.services.datastore.DatastoreV1$.Key":{"partition_id":{"com.google.api.services.datastore.DatastoreV1$.PartitionId":{"dataset_id":"s~roodev001","namespace":""}},"path_element":[{"kind":"Dummy","id":4503905778008064,"name":""}]}},"property":[{"name":"number","value":{"boolean_value":false,"integer_value":879,"double_value":0.0,"timestamp_microseconds_value":0,"key_value":null,"blob_key_value":"","string_value":"","blob_value":"","entity_value":null,"list_value":[],"meaning":0,"indexed":true}}]}
...
I used this code to populate the datastore with dummy data before running the Dataflow pipeline:
package com.example.datastore;
import com.google.gcloud.AuthCredentials;
import com.google.gcloud.datastore.*;
import java.io.IOException;
public class PopulateDummyData { // enclosing class added for completeness; name is arbitrary

    public static void main(String[] args) throws IOException {
        Datastore datastore = DatastoreOptions.builder()
                .projectId("myProjectId")
                .authCredentials(AuthCredentials.createApplicationDefaults())
                .build().service();

        KeyFactory dummyKeyFactory = datastore.newKeyFactory().kind("Dummy");

        Batch batch = datastore.newBatch();
        int batchCount = 0;
        for (int i = 0; i < 4000; i++) {
            IncompleteKey key = dummyKeyFactory.newKey();
            System.out.println("adding entity " + i);
            batch.add(Entity.builder(key).set("number", i).build());
            batchCount++;
            if (batchCount > 99) {
                batch.submit();
                batch = datastore.newBatch();
                batchCount = 0;
            }
        }
        // Submit the final partial batch so the last (up to 99) entities are not lost.
        if (batchCount > 0) {
            batch.submit();
        }
        System.out.println("done");
    }
}
So why is BigQuery rejecting my Avro files?
BigQuery uses the C++ Avro library, and apparently it doesn't like the "$" in the namespace. Here's the error message:
Invalid namespace: com.google.api.services.datastore.DatastoreV1$
We're working on getting these Avro error messages out to the end user.
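A possible workaround until then (a sketch, assuming BigQuery accepts the rest of the generated schema) is to rewrite the schema JSON so the namespace no longer contains the "$". Avro's binary encoding does not embed record names, so a structurally identical schema with a cleaned namespace still reads the same data; use it in place of getSchema() for the coder, the file writer, and AvroIO:

// Hypothetical variant of getSchema() that strips the offending "$" from the namespace.
private static Schema getSanitizedSchema() {
    Schema raw = ProtobufData.get().getSchema(Entity.class);
    // Only the namespace text changes; the field layout stays identical.
    return new Schema.Parser().parse(raw.toString().replace("DatastoreV1$", "DatastoreV1"));
}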

Unable to get Google BigQuery and Google App Engine to work

Found the answer to my question; posting it here for those having the same problem.
ANSWER: When working with HTTP servlets I needed to have the jars within the WEB-INF/lib directory; otherwise I could just keep them under the Java build path (Libraries). So in Eclipse, right-click on lib, then Add Google APIs and select BigQuery. The layout this produces is sketched below.
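For illustration, the kind of layout the servlet container expects (a sketch; the exact jar names and versions depend on the client-library release you use):

war/
  WEB-INF/
    web.xml
    lib/
      google-api-client-x.y.z.jar
      google-http-client-x.y.z.jar
      google-http-client-jackson2-x.y.z.jar
      google-api-services-bigquery-vN-rev-x.y.z.jar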
I am testing out Google App Engine with BigQuery.
I am able to run BigQuery fine in Eclipse when I run it as an app; however, when I run it as an HttpServlet I keep getting the following error!
java.lang.NoClassDefFoundError: com/google/api/client/json/JsonFactory
Below is the exact code I am using.
package com.hw3.test;
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.BigqueryScopes;
import com.google.api.services.bigquery.model.GetQueryResultsResponse;
import com.google.api.services.bigquery.model.QueryRequest;
import com.google.api.services.bigquery.model.QueryResponse;
import com.google.api.services.bigquery.model.TableCell;
import com.google.api.services.bigquery.model.TableRow;
import java.io.IOException;
import javax.servlet.http.*;
import java.util.List;
import java.util.Scanner;
@SuppressWarnings("serial")
public class HelloWord3Servlet extends HttpServlet {

    public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // If this call is commented out I get the text below; otherwise I get the error from the title.
        Bigquery bigquery = createAuthorizedClient();
        resp.setContentType("text/plain");
        resp.getWriter().println("\nQuery Results:\n------------\n");
    }

    private static List<TableRow> executeQuery(String querySql, Bigquery bigquery, String projectId)
            throws IOException {
        QueryResponse query = bigquery.jobs().query(projectId, new QueryRequest().setQuery(querySql)).execute();
        // Execute it
        GetQueryResultsResponse queryResult = bigquery.jobs()
                .getQueryResults(query.getJobReference().getProjectId(), query.getJobReference().getJobId()).execute();
        return queryResult.getRows();
    }

    public static Bigquery createAuthorizedClient() throws IOException {
        // Create the credential
        HttpTransport transport = new NetHttpTransport();
        JsonFactory jsonFactory = new JacksonFactory();
        GoogleCredential credential = GoogleCredential.getApplicationDefault(transport, jsonFactory);
        // Depending on the environment that provides the default credentials
        // (e.g. Compute Engine, App Engine), the credentials may require us to
        // specify the scopes we need explicitly.
        // Check for this case, and inject the Bigquery scope if required.
        if (credential.createScopedRequired()) {
            credential = credential.createScoped(BigqueryScopes.all());
        }
        return new Bigquery.Builder(transport, jsonFactory, credential).setApplicationName("Bigquery Samples").build();
    }

    public static void main(String[] args) throws IOException {
        Scanner sc;
        if (args.length == 0) {
            // Prompt the user to enter the id of the project to run the queries under
            System.out.print("Enter the project ID: ");
            sc = new Scanner(System.in);
        } else {
            sc = new Scanner(args[0]);
        }
        String projectId = sc.nextLine();

        // Create a new Bigquery client authorized via Application Default Credentials.
        Bigquery bigquery = createAuthorizedClient();
        List<TableRow> rows = executeQuery(
                "SELECT TOP(corpus, 10) as title, COUNT(*) as unique_words " + "FROM [publicdata:samples.shakespeare]",
                bigquery, projectId);
        printResults(rows);
    }

    private static void printResults(List<TableRow> rows) {
        System.out.print("\nQuery Results:\n------------\n");
        for (TableRow row : rows) {
            for (TableCell field : row.getF()) {
                System.out.printf("%-50s", field.getV());
            }
            System.out.println();
        }
    }
}
I got this code directly from the Google website, although I did modify it slightly so that I can test out App Engine. However, it will not work when using App Engine.
Any help is greatly appreciated!
It sounds like your dependencies aren't configured correctly when you are running as an HttpServlet. How do you tell your app which dependencies to use? What version are you trying to load? Is that version available in Google App Engine?
Note that the specific version of the Jackson libraries you require changes depending on the environment you are running in. See https://developers.google.com/api-client-library/java/google-http-java-client/setup for a list of dependencies you need in various environments.

Swift and SQLite

I am following a tutorial on using SQLite with Swift; everything is working fine (the code is below). I am confused about the database itself. I would be grateful if an expert could clarify the following questions:
1. I want the database to be part of the app, to be used offline; the user will download the app with the database inside. Where should I copy the database itself?
2. I tried to copy the database to the Supporting Files folder, but when the app runs on my iPhone the database is not copied to the phone; instead, I have to build a new database on the iPhone.
3. The database will be large, so I don't want to build it in the app (I am planning to copy the data from Excel using the Firefox SQLite Manager; alternatives for copying are very welcome).
Any hints are more than welcome. Thanks in advance.
let filemgr = NSFileManager.defaultManager()
let dirPaths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)
let docsDir = dirPaths[0] as! String
databasePath = docsDir.stringByAppendingPathComponent("contacts.db")

if !filemgr.fileExistsAtPath(databasePath as String) {
    let contactDB = FMDatabase(path: databasePath as String)
    if contactDB == nil {
        println("Error: \(contactDB.lastErrorMessage())")
    }
    if contactDB.open() {
        let sql_stmt = "CREATE TABLE IF NOT EXISTS CONTACTS (ID INTEGER PRIMARY KEY AUTOINCREMENT, NAME TEXT, ADDRESS TEXT, PHONE TEXT)"
        if !contactDB.executeStatements(sql_stmt) {
            println("Error: \(contactDB.lastErrorMessage())")
        }
        contactDB.close()
    } else {
        println("Error: \(contactDB.lastErrorMessage())")
    }
}
Create your own database and put it into your Xcode project (just drag the database into the Xcode project), then use the following code to copy the database into your app:
override func viewDidLoad() {
    super.viewDidLoad()
    var fileManager = NSFileManager.defaultManager()
    var sourcePath = NSBundle.mainBundle().resourcePath?.stringByAppendingPathComponent("DataBase.db")
    let docsPath = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true)[0] as! String
    let databaseStr = "DataBase.db"
    let dbPath = docsPath.stringByAppendingPathComponent(databaseStr)
    // Check whether the database already exists; if not, copy it to the destination path.
    if fileManager.fileExistsAtPath(dbPath) == false {
        var error: NSError?
        fileManager.copyItemAtPath(sourcePath!, toPath: dbPath, error: &error)
    }
}
Once the database is copied into the application bundle:
1) Add libsqlite3.dylib to your project:
to add it, go to the application target -> Build Phases -> Link Binary With Libraries and click +.
2) Add #import <sqlite3.h> to your Objective-C bridging header file.
To create a bridging header, refer to this: How to use Objective-C Classes in Swift.
Now you will be able to access all the SQLite methods in your Swift code.
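For readers on current Swift versions, the same bundle-to-Documents copy looks roughly like this (a sketch using today's Foundation API; file names match the example above):

let fileManager = FileManager.default
let docsURL = fileManager.urls(for: .documentDirectory, in: .userDomainMask)[0]
let dbURL = docsURL.appendingPathComponent("DataBase.db")
// The app bundle is read-only, so the database must be copied out before it can be written to.
if !fileManager.fileExists(atPath: dbURL.path),
   let sourceURL = Bundle.main.url(forResource: "DataBase", withExtension: "db") {
    try? fileManager.copyItem(at: sourceURL, to: dbURL)
}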
I think you cannot insert a database in Swift; if I recall correctly, it is proprietary, so you cannot edit it outside of the app...

Grails: copy a file from grails-app to web-app directory during bootstrap

I have some files in a specific folder inside the grails-app directory. During bootstrap, I would like to take one of those files (let's say the latest; it doesn't matter which) and copy it into the web-app folder, to make it accessible to the Grails application.
How would you do that? I wrote something like this:
class BootStrap {

    GrailsApplication grailsApplication

    def init = { servletContext ->
        // ...
        def source = new File('grails-app/myFolder/my-file-' + grailsApplication.metadata.getApplicationVersion() + '.txt')
        def destination = new File('web-app/my-current-file.txt')
        source?.withInputStream { is ->
            destination << is
        }
        // ...
    }
}
But I am having difficulty identifying the right paths for the source and destination files (I am getting a FileNotFoundException). I have already double-checked the folder and file names; my problem is the starting point for relative paths.
Is the bootstrap a good place to perform this kind of operation?
As always, thanks in advance.
I made it work in BootStrap (please read the entire answer):
class BootStrap {

    GrailsApplication grailsApplication

    def init = { servletContext ->
        def applicationContext = grailsApplication.mainContext
        String basePath = applicationContext.getResource("/").getFile().toString()
        File source = new File("${basePath}/../grails-app/myFolder/" + grailsApplication.metadata.getApplicationVersion() + '.txt')
        File destination = new File("${basePath}/my-current-file.txt")
        source?.withInputStream {
            destination << it
        }
    }
}
But, as suggested by Muein Muzamil, the best approach is to use events.
Here is his solution applied to my example:
eventCompileEnd = {
    metadata = grails.util.Metadata.getCurrent()
    appVersion = metadata."app.version"
    ant.copy(file: "${basedir}/grails-app/myFolder/${appVersion}.txt", tofile: "${basedir}/web-app/my-current-file.txt")
}
How about hooking into Grails events? Currently, as part of the project compilation step, I copy my external configuration file from the conf folder to the classpath. You can do something similar.
This is what I have in the _Events.groovy file (which lives in the project's scripts directory); I guess you can do something similar to this.
eventCompileEnd = {
    ant.copy(todir: classesDirPath) {
        fileset(file: "${basedir}/grails-app/conf/override.properties")
    }
}
