Grails database migration plugin - Java heap space

I am running Grails 1.3.7 and using the Grails database migration plugin, version database-migration-1.0.
The problem: I have a migration changeset that pulls blobs out of a table and writes them to disk. While running through this migration I run out of heap space. I was thinking I would need to flush and clear the session to free up some space, however I am having difficulty getting access to the session from within the migration. (The reason it's in a migration: we are moving away from storing files in Oracle and putting them on disk instead.)
I have tried
SessionFactoryUtils.getSession(sessionFactory, true)
I have also tried
SecurityRequestHolder.request.getSession(false) // request is null -> not surprising
changeSet(author: "userone", id: "saveFilesToDisk-1") {
    grailsChange {
        change {
            def fileIds = sql.rows("""SELECT id FROM erp_file""")
            for (row in fileIds) {
                def erpFile = ErpFile.get(row.id)
                erpFile.writeToDisk()
                session.flush()
                session.clear()
                propertyInstanceMap.get().clear()
            }
            ConfigurationHolder.config.erp.ErpFile.persistenceMode = previousMode
        }
    }
}
Any help would be greatly appreciated.

The application context will be automatically available in your migration as ctx. You can get the session like this:
def session = ctx.sessionFactory.currentSession

To access the session, you can use the withSession closure like this:
Book.withSession { session ->
    session.clear()
}
But this may not be the reason your app runs out of heap space. If the data volume is large, then
def fileIds = sql.rows("""SELECT id FROM erp_file""")
for (row in fileIds) {
    ..........
}
will eat up your heap. Process the data with pagination instead of loading it all at once.
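For example, a minimal sketch of batched processing, assuming ctx is available as described above and that the id list itself (unlike the blobs) fits comfortably in memory; the batch size of 50 is arbitrary:
def session = ctx.sessionFactory.currentSession
def fileIds = sql.rows("SELECT id FROM erp_file").collect { it.id }
int batchSize = 50 // arbitrary; tune for your blob sizes
for (int i = 0; i < fileIds.size(); i += batchSize) {
    def batch = fileIds.subList(i, Math.min(i + batchSize, fileIds.size()))
    batch.each { id ->
        ErpFile.get(id)?.writeToDisk()
    }
    session.flush() // push any pending changes
    session.clear() // evict the loaded blobs so the heap can be reclaimed
}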

Related

Shrine clear cached images on persistence success

Background
I am using file system storage with the Shrine::Attachment module in a model (my_model), with ActiveRecord (Rails). I am also using it in a direct upload scenario, therefore I need the response from the file upload (save to cache).
my_model.rb
class MyModel < ApplicationRecord
  include ImageUploader::Attachment(:image) # adds an `image` virtual attribute
  # omitted relations & code...
end
my_controller.rb
def create
  @my_model = MyModel.new(my_model_params)
  # currently creating derivatives & persisting all in one go
  @my_model.image_derivatives! if @my_model.image
  if @my_model.save
    render json: { success: "MyModel created successfully!" }
  else
    @errors = @my_model.errors.messages
    render 'errors', status: :unprocessable_entity
  end
end
Goal
Ideally I want to clear only the cached file(s) I currently have hold of in my create controller, as soon as they (the derivatives and the original file) have been persisted to permanent storage.
What is the best way to do this for scenario A (synchronous) and scenario B (asynchronous)?
What I have considered/tried
After reading through the docs I have noticed 3 possible ways of clearing cached images:
1. Run a rake task to clear cached images.
I really don't like this, as I believe the cached files should be cleaned up once the file has been persisted, not left to an admin task (cron job) that can't be tested with an image persistence spec.
# FileSystem storage
file_system = Shrine.storages[:cache]
file_system.clear! { |path| path.mtime < Time.now - 7*24*60*60 } # delete files older than 1 week
2. Run Shrine.storages[:cache] in an after block
Is this only for background jobs?
attacher.atomic_persist do |reloaded_attacher|
  # run code after attachment change check but before persistence
end
3. Move the cache file to permanent storage
I don't think I can use this, as my direct upload occurs in two distinct parts: 1) immediately upload the attached file to the cache store, then 2) save it to the newly created record.
plugin :upload_options, cache: { move: true }, store: { move: true }
Are there better ways of clearing promoted images from cache for my needs?
Synchronous solution for single image upload case:
def create
  @my_model = MyModel.new(my_model_params)
  image_attacher = @my_model.image_attacher
  image_attacher.create_derivatives # create the different sized images
  image_cache_id = image_attacher.file.id # save the cache file id, as it is lost in the next step
  image_attacher.record.save(validate: true) # promote the original file to permanent storage
  Shrine.storages[:cache].delete(image_cache_id) # clear only the cached image used to create the derivatives (other in-flight cached uploads are left alone)
end
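For scenario B (asynchronous), one option is a variation on the same idea: capture the cached file id and hand it to a background job once promotion has succeeded. A minimal sketch, assuming ActiveJob; DeleteCachedImageJob is a hypothetical name, not part of Shrine:
class DeleteCachedImageJob < ApplicationJob
  queue_as :default

  def perform(cache_id)
    storage = Shrine.storages[:cache]
    # the cached upload is no longer needed once promotion has succeeded
    storage.delete(cache_id) if storage.exists?(cache_id)
  end
end

# in the controller, after the successful save:
# DeleteCachedImageJob.perform_later(image_cache_id)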

Number of pending changes to be replicated by PouchDB

My app uses PouchDB (and Ionic 1) to replicate its data from a local DB to a server DB as soon as the network is available (through a live & retry replication).
I would like to display on the screen the number of changes waiting for replication (including 0 when everything has been replicated).
Is there some way to do that with PouchDB?
(If this is not feasible, a fallback solution would be a "dirty" flag indicating whether everything has been replicated or not. Any idea for this?)
Thanks in advance!
Here is the way I did it (it's only the 'fallback' solution):
PouchDB.replicate(localDb, remoteDb, options)
  .on('paused', function (info) {
    if (info == undefined) {
      // the replication has finished and is waiting for other changes
      $rootScope.syncStatus = "pristine";
    }
    else {
      // there are some pending changes to be replicated remotely
      $rootScope.syncStatus = "dirty";
    }
  });
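For an actual pending count rather than a flag: depending on your PouchDB version and the source adapter, the info object passed to the 'change' event may carry a pending field. A sketch under that assumption ($rootScope.pendingChanges is a made-up binding):
var rep = PouchDB.replicate(localDb, remoteDb, { live: true, retry: true });
rep.on('change', function (info) {
  if (typeof info.pending === 'number') {
    $rootScope.pendingChanges = info.pending; // documents still waiting to replicate
  }
}).on('paused', function (err) {
  if (!err) {
    $rootScope.pendingChanges = 0; // caught up
  }
});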

Import entities from local storage when using ASP.NET WebApi OData not loading extra metadata

When I try to save imported entities from local storage, an exception is thrown here:
var extraMetadata = aspect.extraMetadata;
var uri = extraMetadata.uri || extraMetadata.id;
if (core.stringStartsWith(uri, baseUri)) {
    uri = routePrefix + uri.substring(baseUri.length);
}
request.requestUri = uri;
if (extraMetadata.etag) {
    request.headers["If-Match"] = extraMetadata.etag;
}
But if I get the data from the OData service directly, it saves correctly. Is there anything I am missing when importing data from local storage? I tried this solution but it didn't help me.
This is a bug we are tracking (#2574). I was hoping we'd fix it for v.1.4.12 but it looks like it will have to wait a cycle.
There is no good workaround. You can try to remember the extraMetadata yourself (in some sideband storage) and re-attach it when you re-import. Not fun, I know. Sorry.
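A rough sketch of that sideband idea, assuming the entities expose entityAspect.extraMetadata and keying the stash by entity key (metadataStash and both helper names are made up):
var metadataStash = {};

// call before exporting the entities to local storage
function stashExtraMetadata(manager) {
  manager.getEntities().forEach(function (entity) {
    metadataStash[entity.entityAspect.getKey().toString()] = entity.entityAspect.extraMetadata;
  });
}

// call after re-importing the entities from local storage
function restoreExtraMetadata(manager) {
  manager.getEntities().forEach(function (entity) {
    var saved = metadataStash[entity.entityAspect.getKey().toString()];
    if (saved) {
      entity.entityAspect.extraMetadata = saved;
    }
  });
}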

How are my tests not affecting the database...?

I recently switched from using regular old tests to using WebTest and this "No Database Test Runner":
from django.test.simple import DjangoTestSuiteRunner

class NoTestDbDatabaseTestRunner(DjangoTestSuiteRunner):
    def setup_databases(self, **kwargs):
        pass

    def teardown_databases(self, old_config, **kwargs):
        pass
Here's an example test which HAS to be hitting the database somehow...
What is happening? Are my tests hitting the database but rolling back to some old state? From test to test I can see that each created listing has an incremented id.
def test_image_upload(self):
    form_data = self.listing_form_defaults.copy()
    form_data['images-TOTAL_FORMS'] = '3'
    upload_files = [
        ('images-0-image', 'testdata/1.png'),
        ('images-1-image', 'testdata/2.png'),
        ('images-2-image', 'testdata/3.png'),
    ]
    form_resp = self.app.post(
        reverse('listing_create'),
        form_data,
        upload_files=upload_files,
        user='kmike'
    ).follow()
    assert len(form_resp.context['listing'].images.all()) == 3
form_resp.context['listing'].images.all() HAS to be hitting the database; I printed it and it showed records from my database.
I'm just confused: my tests run blazing fast and don't seem to actually change my database. How is this working?!
Tests that require a database (namely, model tests) will not use your "real" (production) database: separate, blank databases are created for the tests. See the Django documentation on the test database.
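If that isolation matters more than the speed gained by skipping database setup, the usual fix is simply to go back to the stock runner so Django creates and destroys a separate test database. A minimal sketch for the old django.test.simple API shown above:
# settings.py -- the stock runner creates a blank test database before the
# suite runs and destroys it afterwards; django.test.TestCase additionally
# wraps each test in a transaction that is rolled back
TEST_RUNNER = 'django.test.simple.DjangoTestSuiteRunner'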

Velocity caching and IIS

I want to know the relation between Velocity and IIS. If a request is satisfied by Velocity, will it still use a worker process, or what happens? I'm confused.
Also, I want to store data like country, state and city for auto-suggest in Velocity. This database could be around 3 GB. How will Velocity work, and how will IIS work? Is this going to affect IIS? Basically my requirement is that I want to keep all country, state and city data in Velocity, and I don't want to hit the database or make IIS busy. What is the solution?
Please help
Velocity was the codename for Microsoft's AppFabric distributed caching technology. Very similar to memcached, it is used for caching objects across multiple computers.
This has no real bearing on how IIS processes requests. All requests are satisfied by IIS; AppFabric is a mechanism for storing data, not processing requests.
In answer to your second question: you can use AppFabric as a first-call check for data. If the data does not exist in the cache, call the database to populate the cache, and then return the data.
var factory = new DataCacheFactory();
var cache = factory.GetCache("AutoSuggest");
List<Region> regions = cache.Get("Regions") as List<Region>;
if (regions == null) {
    regions = GetRegionsFromDatabase(); // placeholder for your data-access call
    cache.Add("Regions", regions);
}
return regions;
Checking the cache first enables the app to get a faster response, as the database is only hit on the first instance (ideally), and the result data is pushed back into the cache.
You could wrap this up a bit more:
public T Get<T>(string cacheName, string keyName, Func<T> itemFactory)
    where T : class
{
    var cache = dataFactory.GetCache(cacheName); // dataFactory: a shared DataCacheFactory instance
    T value = cache.Get(keyName) as T;
    if (value == null) {
        value = itemFactory();
        cache.Add(keyName, value);
    }
    return value;
}
That way you can change your lookup calls to something similar to:
var regions = Get<List<Region>>("AutoSuggest", "Regions", () => GetRegions());
