App Engine 1.4.0 urlfetch() data over 1MB - google-app-engine

My App Engine version is 1.4.0. The data file dem.bil, which is about 3MB, is under the /war/dem.bil directory, and this is my code to fetch it:
try {
    URLConnection a = url.openConnection();
    InputStream b = a.getInputStream();
    int len = a.getContentLength();
    if (len < 0) {
        return null;
    }
    // System.out.println("Total: " + len);
    byte[] c = new byte[len];
    // a single read() may return fewer bytes than requested,
    // so keep reading until the buffer is full
    int off = 0;
    while (off < len) {
        int n = b.read(c, off, len - off);
        if (n < 0) break;
        off += n;
    }
    b.close();
    return c;
} catch (Exception e) {
    e.printStackTrace();
    return null;
}
}
I know version 1.4.0 increased the URLFetch response limit to 32MB, but when execution reaches InputStream b = a.getInputStream(); it fails with "com.google.appengine.api.urlfetch.ResponseTooLargeException: The response from url localhost:8888/dem.bil was too large.". So can someone tell me why, or is something wrong with my code?

Since version 1.4.0 was released, the limit has been raised to 32MB.
See: http://googleappengine.blogspot.com/2010/12/happy-holidays-from-app-engine-team-140.html
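If you still hit the limit, the low-level URLFetch API makes the size handling explicit and can optionally truncate the response instead of throwing. A minimal sketch, assuming a plain GET is enough here:

import java.net.URL;
import com.google.appengine.api.urlfetch.FetchOptions;
import com.google.appengine.api.urlfetch.HTTPMethod;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;

public class FetchHelper {
    public static byte[] fetch(URL url) throws Exception {
        // allowTruncate() returns a truncated body instead of throwing
        // ResponseTooLargeException when the response exceeds the limit
        HTTPRequest request = new HTTPRequest(url, HTTPMethod.GET,
                FetchOptions.Builder.allowTruncate());
        HTTPResponse response = URLFetchServiceFactory.getURLFetchService().fetch(request);
        return response.getContent();
    }
}

getContent() returns the whole body as a byte[], which also sidesteps the partial-read pitfall of a single InputStream.read() call.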

Related

How to protect my SQLite db by intentionally corrupting it, then fix it through code?

This is my first app on Android with Java and SQLite.
ISSUE:
I have a local SQLite db in my app. I was very surprised to see how easy it is to get access to the db once the app is installed (no need to be a programmer or a hacker).
I tried adding SQLCipher to my app, but it only worked on newer Android versions (11 and 12), didn't work on Android 9 for example, and made my app's size much bigger.
After researching more I found a better solution for my case, which doesn't involve encrypting the db with SQLCipher but rather consists of corrupting the first bytes of the db file; on each launch the app repairs the file and uses the fixed copy instead. This ensures that anyone who decompiles the APK will only get access to a corrupt db file and will have to put in more effort to fix it, which is my goal.
I came across this solution in a reply [here][1], but I don't know how to implement it, as I am new to Android and SQLite programming. Any help is much appreciated on how to actually do it.
These are the steps as described by the user farhad.kargaran, which need more explanation, as I don't get how to do them:
1- Corrupt the db file (convert it to a byte array and change some values).
2- Copy it into the assets folder.
3- On first run, fix the corrupted file from assets and copy it into the databases folder.
Change the first 200 byte values like this:
int index = 0;
// swap each adjacent pair within the first 200 bytes (100 swaps)
for (int i = 0; i < 100; i++) {
    byte tmp = b[index];
    b[index] = b[index + 1];
    b[index + 1] = tmp;
    index += 2;
}
As only the first 200 bytes were swapped, the same code is used to fix the first 200 byte values: swapping the same pairs again restores the original order.
Here is my code for the SQLiteOpenHelper if needed:
public class DatabaseHelper extends SQLiteOpenHelper {

    private static final String TAG = DatabaseHelper.class.getSimpleName();

    public static String DB_PATH;
    public static String DB_NAME;
    public SQLiteDatabase database;
    public final Context context;

    public SQLiteDatabase getDb() {
        return database;
    }

    public DatabaseHelper(Context context, String databaseName, int db_version) {
        super(context, databaseName, null, db_version);
        this.context = context;
        DB_PATH = getReadableDatabase().getPath();
        DB_NAME = databaseName;
        openDataBase();
        // prepare in case we need to upgrade
        int cur_version = database.getVersion();
        if (cur_version == 0) database.setVersion(1);
        Log.d(TAG, "DB version : " + db_version);
        if (cur_version < db_version) {
            try {
                copyDataBase();
                Log.d(TAG, "Upgrade DB from v." + cur_version + " to v." + db_version);
                database.setVersion(db_version);
            } catch (IOException e) {
                Log.d(TAG, "Upgrade error");
                throw new Error("Error upgrading database!");
            }
        }
    }

    public void createDataBase() {
        boolean dbExist = checkDataBase();
        if (!dbExist) {
            this.getReadableDatabase();
            this.close();
            try {
                copyDataBase();
            } catch (IOException e) {
                Log.e(TAG, "Copying error");
                throw new Error("Error copying database!");
            }
        } else {
            Log.i(this.getClass().toString(), "Database already exists");
        }
    }

    private boolean checkDataBase() {
        SQLiteDatabase checkDb = null;
        try {
            String path = DB_PATH + DB_NAME;
            checkDb = SQLiteDatabase.openDatabase(path, null, SQLiteDatabase.OPEN_READONLY);
        } catch (SQLException e) {
            Log.e(TAG, "Error while checking db");
        }
        if (checkDb != null) {
            checkDb.close();
        }
        return checkDb != null;
    }

    private void copyDataBase() throws IOException {
        InputStream externalDbStream = context.getAssets().open(DB_NAME);
        String outFileName = DB_PATH + DB_NAME;
        OutputStream localDbStream = new FileOutputStream(outFileName);
        byte[] buffer = new byte[1024];
        int bytesRead;
        while ((bytesRead = externalDbStream.read(buffer)) > 0) {
            localDbStream.write(buffer, 0, bytesRead);
        }
        localDbStream.close();
        externalDbStream.close();
    }

    public SQLiteDatabase openDataBase() throws SQLException {
        String path = DB_PATH + DB_NAME;
        if (database == null) {
            createDataBase();
            database = SQLiteDatabase.openDatabase(path, null, SQLiteDatabase.OPEN_READWRITE);
        }
        return database;
    }

    @Override
    public synchronized void close() {
        if (database != null) {
            database.close();
        }
        super.close();
    }
}
Much appreciated.
[1]: https://stackoverflow.com/a/63637685/18684673
As part of copyDataBase, correct and then write the corrupted first chunk, then copy the rest.
This could be done in various ways, e.g.:
long buffersRead = 0; // <<<<< ADDED for detecting the first buffer
byte[] buffer = new byte[1024];
int bytesRead;
while ((bytesRead = externalDbStream.read(buffer)) > 0) {
    if (buffersRead++ < 1) {
        // correct the first 200 bytes here before writing ....
    }
    localDbStream.write(buffer, 0, bytesRead);
}
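Putting the pieces together, a minimal sketch of copyDataBase with the repair step inlined, assuming the asset was corrupted with the pair-swap shown above and that the file is at least 200 bytes long:

private void copyDataBase() throws IOException {
    InputStream externalDbStream = context.getAssets().open(DB_NAME);
    OutputStream localDbStream = new FileOutputStream(DB_PATH + DB_NAME);
    byte[] buffer = new byte[1024];
    int bytesRead;
    boolean firstBuffer = true;
    while ((bytesRead = externalDbStream.read(buffer)) > 0) {
        if (firstBuffer) {
            // undo the pair swap applied to the first 200 bytes;
            // swapping the same pairs again restores the original order
            for (int i = 0; i + 1 < 200 && i + 1 < bytesRead; i += 2) {
                byte tmp = buffer[i];
                buffer[i] = buffer[i + 1];
                buffer[i + 1] = tmp;
            }
            firstBuffer = false;
        }
        localDbStream.write(buffer, 0, bytesRead);
    }
    localDbStream.close();
    externalDbStream.close();
}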

GAE, local datastore does not get created

I had no idea to what extent GAE is not easy to understand :(
My servlet manipulates a JSON string and then I try to store it in the datastore.
When I run the application I get this output:
Jan 27, 2014 6:59:04 PM com.google.appengine.api.datastore.dev.LocalDatastoreService load
INFO: The backing store, D:\Android\IntelliJ IDEA\workspace\EyeBall\AppEngine\out\artifacts\AppEngine_war_exploded\WEB-INF\appengine-generated\local_db.bin, does not exist. It will be created.
1
2
3
4
5
7
***
***
***
***
***
8
9
Although it says local_db.bin will be created, when I navigate to that directory the file is not there. Also, when I open http://localhost:8080/_ah/admin/datastore in a browser, nothing is displayed in the Entity Kind drop-down list.
So what happened to local_db.bin? Why doesn't it get generated?
Any suggestion would be appreciated. Thanks.
==================
UPDATE:
I added my code based on a request.
private static final String NO_DEVICE_ID = "FFFF0000";
private static final String SAMPLE_JSON = "{\"history\":[{\"date\":null,\"info\":null,\"title\":\"Maybank2u.com\",\"url\":\"https://www.maybank2u.com.my/mbb/Mobile/info.do\",\"visits\":14},{\"date\":null,\"info\":null,\"title\":\"Maybank2u.com\",\"url\":\"https://www.maybank2u.com.my/mbb/Mobile/adaptInfo.do\",\"visits\":4},{\"date\":null,\"info\":null,\"title\":\"Maybank2u.com\",\"url\":\"http://www.maybank2u.com.my/mbb_info/m2u/public/personalBanking.do\",\"visits\":16},{\"date\":null,\"info\":null,\"title\":\"Maybank2u.com Online Financial Services\",\"url\":\"https://www.maybank2u.com.my/mbb/m2u/common/M2ULogin.do?action=Login\",\"visits\":52},{\"date\":null,\"info\":null,\"title\":\"‭BBC\",\"url\":\"http://www.bbc.co.uk/persian/\",\"visits\":16}]}";
private static final String QUERY_HISTORY_DEVICE = "SELECT m FROM HistoryDeviceJPA m WHERE m.userUUID = :keyword ORDER BY m.domain ASC";
private static final String QUERY_HISTORY = "SELECT m FROM HistoryJPA m WHERE m.pageAddress = :keyword";

protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    // displayError(response, "The page doesn't support httpGet");
    String deviceId = NO_DEVICE_ID;
    String content = SAMPLE_JSON;
    System.out.println("1");
    HistoryBrowser historyBrowser = parseJson(content);
    if (historyBrowser == null)
        return;
    System.out.println("2");
    List<HistoryBrowser.BrowserInfo> historyList = historyBrowser.getHistory();
    if (historyList == null)
        return;
    System.out.println("3");
    List<HistoryDeviceJPA> historyDeviceJPAList = new ArrayList<HistoryDeviceJPA>(historyList.size());
    for (int i = 0; i < historyList.size(); i++) {
        try {
            HistoryBrowser.BrowserInfo browser = historyList.get(i);
            HistoryDeviceJPA historyDeviceJPA = new HistoryDeviceJPA();
            historyDeviceJPA.setUserUUID(deviceId);
            historyDeviceJPA.setDomain(getDomainName(browser.getUrl()));
            historyDeviceJPA.setPageAddress(browser.getUrl());
            historyDeviceJPA.setPageTitle(browser.getTitle());
            historyDeviceJPA.setPageVisits(browser.getVisits());
            historyDeviceJPAList.add(historyDeviceJPA);
        } catch (URISyntaxException e) {
            System.out.println(e.getMessage());
        }
    }
    System.out.println("4");
    // get history of device from data store
    EntityManager em = EMF.get().createEntityManager();
    Query q = em.createQuery(QUERY_HISTORY_DEVICE).setParameter("keyword", deviceId);
    @SuppressWarnings("unchecked")
    List<HistoryDeviceJPA> dbList = (List<HistoryDeviceJPA>) q.getResultList();
    System.out.println("5");
    // If there is no result (shows there is no record for that device)
    if (dbList == null)
        addHistoryDeviceJPAToDs(historyDeviceJPAList);
    else {
        System.out.println("7");
        // find each item in the datastore and replace it if needed;
        // if the current page visits are less than or equal to the previous
        // visits, don't do anything (remove item from historyDeviceJPAList)
        outerLoop:
        for (int i = 0; i < historyDeviceJPAList.size(); i++) {
            HistoryDeviceJPA deviceItem = historyDeviceJPAList.get(i);
            System.out.println("***");
            for (int j = 0; j < dbList.size(); j++) {
                HistoryDeviceJPA dbItem = dbList.get(j);
                if (deviceItem.getPageAddress().equalsIgnoreCase(dbItem.getPageAddress())) {
                    if (deviceItem.getPageVisits() > dbItem.getPageVisits()) {
                        long diff = deviceItem.getPageVisits() - dbItem.getPageVisits();
                        dbItem.setPageVisits(deviceItem.getPageVisits());
                        HistoryJPA historyJPA = findHistoryJPA(dbItem.getPageAddress());
                        historyJPA.setPageVisits(historyJPA.getPageVisits() + diff);
                        // update datastore
                        addHistoryDeviceJPAToDs(dbItem);
                        addHistoryJPAToDs(historyJPA);
                        // don't check other items of the j list
                        break outerLoop;
                    }
                }
            }
        }
        System.out.println("8");
    }
    System.out.println("9");
    // http://www.sohailaziz.com/2012/06/scheduling-activities-services-and.html
    // https://dev.twitter.com/docs/api/1.1
    // https://developers.google.com/appengine/docs/java/datastore/jdo/creatinggettinganddeletingdata?csw=1#Updating_an_Object
    // http://en.wikibooks.org/wiki/Java_Persistence/Inheritance
}
and "6" is printed here:
private void addHistoryDeviceJPAToDs(List<HistoryDeviceJPA> list) {
    System.out.println("6");
    EntityManager em = EMF.get().createEntityManager();
    try {
        for (int i = 0; i < list.size(); i++) {
            System.out.println("=> " + i + " - " + list.get(i).toString());
            em.getTransaction().begin();
            em.persist(list.get(i));
            em.getTransaction().commit();
        }
    } finally {
        em.close();
    }
}
After debugging I found the problem is in these lines:
List<HistoryDeviceJPA> dbList = (List<HistoryDeviceJPA>) q.getResultList();
if (dbList == null)
    addHistoryDeviceJPAToDs(historyDeviceJPAList);
'dbList' is never null, and its size is 0 if there is nothing in the datastore. That's why the addHistoryDeviceJPAToDs method was never invoked. By changing the code to the following, the problem was solved and the local db was created.
List<HistoryDeviceJPA> dbList = (List<HistoryDeviceJPA>) q.getResultList();
if (dbList == null)
    return;
System.out.println("5");
// If there is no result (shows there is no record for that device)
if (dbList.size() == 0)
    addHistoryDeviceJPAToDs(historyDeviceJPAList);
For other people who come across the same issue:
GAE will not create local_db.bin until you put data into the datastore. So if the file is not there, there is likely a bug in the application code.
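A quick way to confirm this is to write a single entity with the low-level datastore API and watch the file appear; a minimal sketch (the kind name "Probe" is arbitrary):

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;

// putting any entity forces the dev server to materialize local_db.bin
DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
Entity probe = new Entity("Probe");
probe.setProperty("createdAt", System.currentTimeMillis());
ds.put(probe);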

User Rate Limit Exceeded Exception in Admin SDK

I am working with the Admin SDK. When I update a single user it works fine, but when I try to update a bunch (1000) of users I get a User Rate Limit Exceeded exception. Please check my code below and tell me what I am missing, or give any suggestion.
private Directory getDirectoryService(String adminEmailAddress) {
    Directory directoryService = null;
    try {
        Collection<String> SCOPES = new ArrayList<String>();
        SCOPES.add("https://www.googleapis.com/auth/admin.directory.user");
        SCOPES.add("https://www.googleapis.com/auth/admin.directory.user.readonly");
        SCOPES.add("https://www.googleapis.com/auth/admin.directory.group");
        SCOPES.add("https://www.googleapis.com/auth/admin.directory.group.readonly");
        SCOPES.add("https://www.googleapis.com/auth/admin.directory.orgunit");
        SCOPES.add("https://www.googleapis.com/auth/admin.directory.orgunit.readonly");
        SCOPES.add("https://www.googleapis.com/auth/userinfo.profile");
        HttpTransport httpTransport = new NetHttpTransport();
        JacksonFactory jsonFactory = new JacksonFactory();
        GoogleCredential credential = new GoogleCredential.Builder()
                .setTransport(httpTransport)
                .setJsonFactory(jsonFactory)
                .setServiceAccountId(SERVICE_ACCOUNT_EMAIL)
                .setServiceAccountScopes(SCOPES)
                .setServiceAccountUser(adminEmailAddress)
                .setServiceAccountPrivateKeyFromP12File(new java.io.File(SERVICE_ACCOUNT_PKCS12_FILE_PATH))
                .build();
        directoryService = new Directory.Builder(httpTransport, jsonFactory, credential)
                .setApplicationName("gdirectoryspring")
                .build();
    } catch (Exception e) {
        ErrorHandler.errorHandler(this.getClass().getName(), e);
    }
    return directoryService;
}
Below is the update user code:
Directory directoryService = getDirectoryService("adminEmailAddress");
User user = directoryService.users().get("userPrimaryEmail").execute();
List<UserOrganization> organizaionList = user.getOrganizations();
for (int j = 0; j < organizaionList.size(); j++) {
    UserOrganization singleOrg = organizaionList.get(j);
    if (singleOrg != null) {
        if ("work".equalsIgnoreCase(singleOrg.getCustomType()) || singleOrg.getPrimary() != null) {
            if (singleOrg.getTitle() != null) {
                singleOrg.setTitle(jobTitle);
            }
        }
    }
    user.setOrganizations(organizaionList);
}
Update update = directoryService.users().update(primaryEmail, user);
User userUpdated = update.execute();
In the admin console I increased my limit as below:
Admin SDK: 10.0 requests/second/user
But I am still getting the User Rate Limit Exceeded exception. Can anyone help me?
You are missing exponential backoff.
See here for an example (it's for Drive, but it can be adapted):
https://developers.google.com/drive/handle-errors#implementing_exponential_backoff
In there you will see how it deals with errors like userRateLimitExceeded.
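A minimal sketch of backoff around the update call, assuming the client surfaces rate-limit errors as GoogleJsonResponseException with HTTP status 403 (maxRetries and the delays are arbitrary choices):

import java.util.Random;
import com.google.api.client.googleapis.json.GoogleJsonResponseException;

private User updateWithBackoff(Directory directoryService, String primaryEmail, User user)
        throws Exception {
    Random random = new Random();
    int maxRetries = 5;
    for (int attempt = 0; attempt < maxRetries; attempt++) {
        try {
            return directoryService.users().update(primaryEmail, user).execute();
        } catch (GoogleJsonResponseException e) {
            // userRateLimitExceeded arrives as HTTP 403; back off and retry
            if (e.getStatusCode() == 403 && attempt < maxRetries - 1) {
                // wait 2^attempt seconds plus a random fraction of a second
                Thread.sleep((1L << attempt) * 1000 + random.nextInt(1000));
            } else {
                throw e;
            }
        }
    }
    throw new IllegalStateException("unreachable"); // loop always returns or throws
}

Also consider throttling the 1000 updates client-side so you stay under the 10 requests/second/user quota in the first place.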

Persist data in google app engine datastore

I am trying to store data in the Google App Engine datastore with JPA and I am having some trouble.
My code:
My code :
try {
    for (int i = 1; i <= 10; i++) {
        Employee emp = new Employee();
        emp.setFirstName("John" + i);
        emp.setLastName("Doe" + i);
        emp.setAge(i);
        em.persist(emp);
        em.refresh(emp);
    }
    em.flush();
} catch (Exception e) {
    e.printStackTrace();
} finally {
    em.close();
}
When I launch it, the data is stored but 2 errors occur:
javax.persistence.TransactionRequiredException: This operation requires a transaction yet it is not active (at the line em.flush();)
and
java.lang.NullPointerException
at org.datanucleus.ObjectManagerImpl.flushInternalWithOrdering(ObjectManagerImpl.java:3887) (at the line em.close();)
Does anyone know how to fix them?
Thanks.
Try:
em.getTransaction().begin();
//do all your persist logic
em.getTransaction().commit();
For more: https://developers.google.com/appengine/docs/java/datastore/transactions
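For completeness, a sketch of the original loop with explicit transaction handling. One transaction per entity is used here, because a single App Engine datastore transaction can only operate on one entity group (unless cross-group transactions are enabled):

// assumes em was created from your EntityManagerFactory, as in your setup
try {
    for (int i = 1; i <= 10; i++) {
        em.getTransaction().begin();
        try {
            Employee emp = new Employee();
            emp.setFirstName("John" + i);
            emp.setLastName("Doe" + i);
            emp.setAge(i);
            em.persist(emp);
            em.getTransaction().commit();
        } finally {
            if (em.getTransaction().isActive()) {
                em.getTransaction().rollback(); // clean up if commit did not happen
            }
        }
    }
} finally {
    em.close();
}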

How to use the blob datatype in Postgres

I am using a PostgreSQL database in my Rails application.
To store large files or data in the database I used the blob data type in MySQL.
Which data type do I have to use in Postgres instead of MySQL's blob?
Use bytea (or Large Objects if you absolutely have to).
I think this is the most comprehensive answer on the PostgreSQL wiki itself: https://wiki.postgresql.org/wiki/BinaryFilesInDB
Read the part with the title 'What is the best way to store the files in the Database?'
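For what it's worth, bytea needs no special client-side handling beyond binary parameter binding. A minimal JDBC sketch (the connection string and the table files(id serial primary key, data bytea) are made up for illustration):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ByteaExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password")) {
            // write: bind the file contents as a binary parameter
            byte[] bytes = Files.readAllBytes(Paths.get("somefile.bin"));
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO files (data) VALUES (?)")) {
                ps.setBytes(1, bytes);
                ps.executeUpdate();
            }
            // read: bytea comes back via getBytes()
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT data FROM files WHERE id = ?")) {
                ps.setInt(1, 1);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        byte[] data = rs.getBytes(1);
                        System.out.println("read " + data.length + " bytes");
                    }
                }
            }
        }
    }
}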
Storing files in your database will lead to a huge database size, which you may not want for development, testing, backups, etc.
Instead, you'd use FILESTREAM (SQL Server) or BFILE (Oracle).
There is no default implementation of BFILE/FILESTREAM in Postgres, but you can add one:
https://github.com/darold/external_file
And further information (in French) is available here:
http://blog.dalibo.com/2015/01/26/Extension_BFILE_pour_PostgreSQL.html
To answer the actual question:
apart from bytea, for really large files you can use LOBs (large objects):
// http://stackoverflow.com/questions/14509747/inserting-large-object-into-postgresql-returns-53200-out-of-memory-error
// https://github.com/npgsql/Npgsql/wiki/User-Manual
public int InsertLargeObject()
{
    int noid;
    byte[] BinaryData = new byte[123];

    using (Npgsql.NpgsqlConnection connection = new Npgsql.NpgsqlConnection(GetConnectionString()))
    {
        using (Npgsql.NpgsqlTransaction transaction = connection.BeginTransaction())
        {
            try
            {
                NpgsqlTypes.LargeObjectManager manager = new NpgsqlTypes.LargeObjectManager(connection);
                noid = manager.Create(NpgsqlTypes.LargeObjectManager.READWRITE);
                NpgsqlTypes.LargeObject lo = manager.Open(noid, NpgsqlTypes.LargeObjectManager.READWRITE);
                // lo.Write(BinaryData);
                int i = 0;
                do
                {
                    // write in chunks of at most 1000 bytes
                    int length = 1000;
                    if (i + length > BinaryData.Length)
                        length = BinaryData.Length - i;
                    byte[] chunk = new byte[length];
                    System.Array.Copy(BinaryData, i, chunk, 0, length);
                    lo.Write(chunk, 0, length);
                    i += length;
                } while (i < BinaryData.Length);
                lo.Close();
                transaction.Commit();
            } // End Try
            catch
            {
                transaction.Rollback();
                throw;
            } // End Catch
            return noid;
        } // End Using transaction
    } // End using connection
} // End Function InsertLargeObject
public System.Drawing.Image GetLargeDrawing(int idOfOID)
{
    System.Drawing.Image img;
    using (Npgsql.NpgsqlConnection connection = new Npgsql.NpgsqlConnection(GetConnectionString()))
    {
        lock (connection)
        {
            if (connection.State != System.Data.ConnectionState.Open)
                connection.Open();
            using (Npgsql.NpgsqlTransaction trans = connection.BeginTransaction())
            {
                NpgsqlTypes.LargeObjectManager lbm = new NpgsqlTypes.LargeObjectManager(connection);
                NpgsqlTypes.LargeObject lo = lbm.Open(takeOID(idOfOID), NpgsqlTypes.LargeObjectManager.READWRITE); // take picture oid from method takeOID
                byte[] buffer = new byte[32768];
                using (System.IO.MemoryStream ms = new System.IO.MemoryStream())
                {
                    int read;
                    while ((read = lo.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        ms.Write(buffer, 0, read);
                    } // Whend
                    img = System.Drawing.Image.FromStream(ms);
                } // End Using ms
                lo.Close();
                trans.Commit();
                if (connection.State != System.Data.ConnectionState.Closed)
                    connection.Close();
            } // End Using trans
        } // End lock connection
    } // End Using connection
    return img;
} // End Function GetLargeDrawing
public void DeleteLargeObject(int noid)
{
    using (Npgsql.NpgsqlConnection connection = new Npgsql.NpgsqlConnection(GetConnectionString()))
    {
        if (connection.State != System.Data.ConnectionState.Open)
            connection.Open();
        using (Npgsql.NpgsqlTransaction trans = connection.BeginTransaction())
        {
            NpgsqlTypes.LargeObjectManager lbm = new NpgsqlTypes.LargeObjectManager(connection);
            lbm.Delete(noid);
            trans.Commit();
            if (connection.State != System.Data.ConnectionState.Closed)
                connection.Close();
        } // End Using trans
    } // End Using connection
} // End Sub DeleteLargeObject
