I saw the instructions below in the README on the XCGLogger GitHub page.
"Another common usage pattern is to have multiple loggers, perhaps one for UI issues, one for networking, and another for data issues.
Each log destination can have its own log level. As a convenience, you can set the log level on the log object itself and it will pass that level to each destination. Then set the destinations that need to be different."
I think that makes XCGLogger very useful. Could anyone show a demo of how to add multiple destinations for different purposes? Or do I need to use multiple log objects?
Yes, you would use different log objects in that case.
Based on the example of advanced usage in the readme, you could do something like this:
// Create a logger for UI related events
let logUI = XCGLogger(identifier: "uiLogger", includeDefaultDestinations: false)
// Create a destination for the system console log (via NSLog)
let systemLogDestination = XCGNSLogDestination(owner: logUI, identifier: "uiLogger.systemLogDestination")
// Optionally set some configuration options
systemLogDestination.outputLogLevel = .Debug
systemLogDestination.showLogIdentifier = false
systemLogDestination.showFunctionName = true
systemLogDestination.showThreadName = true
systemLogDestination.showLogLevel = true
systemLogDestination.showFileName = true
systemLogDestination.showLineNumber = true
systemLogDestination.showDate = true
// Add the destination to the logger
logUI.addLogDestination(systemLogDestination)
// Create a logger for DB related events
let logDB = XCGLogger(identifier: "dbLogger", includeDefaultDestinations: false)
// Create a file log destination
let fileLogDestination = XCGFileLogDestination(owner: logDB, writeToFile: "/path/to/file", identifier: "dbLogger.fileLogDestination")
// Optionally set some configuration options
fileLogDestination.outputLogLevel = .Verbose
fileLogDestination.showLogIdentifier = false
fileLogDestination.showFunctionName = true
fileLogDestination.showThreadName = true
fileLogDestination.showLogLevel = true
fileLogDestination.showFileName = true
fileLogDestination.showLineNumber = true
fileLogDestination.showDate = true
// Add the destination to the logger
logDB.addLogDestination(fileLogDestination)
// Add basic app info, version info etc, to the start of the logs
logUI.logAppDetails()
logDB.logAppDetails()
// Add database version to DB log
logDB.info("DB Schema Version 1.0")
This creates two log objects: one for UI events at the Debug level, and one for DB events at the Verbose level.
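With both loggers set up, you just call whichever one matches the subsystem you are logging from. A small usage sketch (the messages are made up for illustration):
// UI events go to the system console via logUI (Debug and above)
logUI.debug("Presenting the settings view controller")
logUI.error("Failed to load the header image")
// DB events go to the log file via logDB (Verbose and above)
logDB.verbose("Opening database connection")
logDB.info("Migration to schema 1.1 complete")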
I have a typical ngrx-data arrangement of 'User' entities linked to a db.
I implement the standard service to handle the data:
@Injectable({ providedIn: 'root' })
export class UserService extends EntityCollectionServiceBase<UserEntity> {
  constructor(serviceElementsFactory: EntityCollectionServiceElementsFactory) {
    super('User', serviceElementsFactory);
  }
}
I read the data using:
this.data$ = this.userService.getAll();
this.data$.subscribe(d => { this.data = d; ... });
Data arrives fine. Now, I have a GUI/HTML form where the user can make changes and update them. It also works fine. Any changes the user makes in the form are applied via:
this.data[fieldName] = newValue;
This updates the data and ngrx-data automatically updates the entity cache.
I want to implement an option where the user can cancel all changes before they are written to the db and get back the initial data from before any adjustments. However, I am somehow unable to overwrite the cached changes.
I tried:
this.userService.clearCache();
this.userService.load();
also tried to re-call:
this.data$ = this.userService.getAll();
but I keep getting the data from the cache that has been changed by the user, not the data from the db. In the db I can see the data is not modified; no steps were taken to write the data to the db.
I am not able to find the approach to discard my entity cache and reload the original db data to replace the cached values.
Any input is appreciated.
You would need to subscribe to the reassigned observable every time you change this.data$, which gets messy.
Instead, bind this.data$ via this.data$ = this.userService.entities$. Then, whether you use load() or getAll(), whenever entities$ changes, your subscription fires. You can even skip the load and getAll calls if you already did that in another process.
You can then use clearCache() followed by load() to reset the cache, as in the sketch below.
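A minimal sketch of that reset path, assuming the UserService from the question (clearCache() and load() are standard EntityCollectionServiceBase methods):
// Discard the locally modified entities and re-fetch from the server;
// entities$ re-emits, so any subscription bound to it picks up the fresh data.
resetFromDb() {
  this.userService.clearCache();
  this.userService.load();
}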
But I strongly recommend keeping the entity data pure. If the user exits in the middle without saving or resetting, the changed data is visible everywhere you use this entity.
My recommended alternatives:
(1) Use an Angular FormGroup to set the form data from the entity values, then reuse that same setter function to reset the form (see the sketch after this list).
(2) Make a function that copies the data, then use that copy function as the reset, for example using _.cloneDeep.
(2.1) Using an rxjs BehaviorSubject:
resetTrigger$ = new BehaviorSubject<boolean>(false);

ngOnInit() {
  combineLatest([
    this.resetTrigger$,
    this.userService.entities$
  ]).subscribe(([trigger, data]) => {
    this.data = _.cloneDeep(data);
  });

  // can skip if already loaded before
  this.userService.load();
}
When you want to reset the data, push a new value to the trigger:
resetForm() {
  this.resetTrigger$.next(!this.resetTrigger$.value);
}
(2.2) Using a plain function (you need to store the original):
this.userService.entities$.pipe(
  tap(d => {
    this.originData = d;
    this.resetForm();
  })
).subscribe();

resetForm:
resetForm() {
  this.data = _.cloneDeep(this.originData);
}
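For option (1) above, a minimal FormGroup sketch, assuming a reactive form with an injected FormBuilder (fb) and hypothetical name/email controls mirroring the User entity:
userForm = this.fb.group({ name: [''], email: [''] });

// Call this once entities$ emits, and again whenever the user hits "Cancel";
// it overwrites the form with the untouched entity values.
setFormFromEntity(user: UserEntity) {
  this.userForm.reset({ name: user.name, email: user.email });
}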
I had already deployed my VPC via the module listed below before I added a count.
This worked just fine; however, due to changes in our infrastructure, I need to add a count to the module:
module "core_vpc" {
  source = "./modules/vpc"

  count = var.environment == "qa" || var.environment == "production" ? 1 : 0

  aws_region     = var.aws_region
  environment    = var.environment
  system         = var.system
  role           = var.system
  vpc_name       = var.system
  vpc_cidr       = var.vpc_cidr
  ssh_key_name   = var.ssh_key_name
  ssh_key_public = var.ssh_key_public
  nat_subnets    = var.nat_subnets
  nat_azs        = var.vpc_subnet_azs
}
Now Terraform wants to update my state file, destroying much of my configuration and replacing it with what is shown in the example below. This is of course not limited to the route table association; it affects all resources created within the module. I can't let this happen as I have production running and do not want to mess with that.
module.K8_subnets.aws_route_table_association.subnet[0] will be destroyed
and replace it with:
module.K8_subnets[0].aws_route_table_association.subnet[0] will be created
Is there a way of preventing Terraform from making these changes, short of editing the tf-state manually?
All I want is for the VPC not to be deployed in DEV.
Thanks.
You can "move" addresses in the Terraform state using terraform state mv SOURCE DESTINATION. Specifically, you can move the old non-counted version onto the new counted version at index 0:
terraform state mv 'module.K8_subnets' 'module.K8_subnets[0]'
This works for individual resources as well as for entire modules. It also works for for_each resources; there you would not have an index but a key to move to (see the example below). And it even works the other way around, if you remove the count / for_each but still want to keep the resource(s).
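For completeness, a hedged example of the for_each case: assuming the module were keyed with for_each = toset(["qa", "production"]) instead of count, the existing instance would be moved to a key (the "qa" key here is hypothetical):
terraform state mv 'module.K8_subnets' 'module.K8_subnets["qa"]'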
Background
I am using file system storage with the Shrine::Attachment module in a model (my_model), with ActiveRecord (Rails). I am also using it in a direct upload scenario, therefore I need the response from the file upload (save to cache).
my_model.rb
class MyModel < ApplicationRecord
  include ImageUploader::Attachment(:image) # adds an `image` virtual attribute

  # omitted relations & code...
end
my_controller.rb
def create
  @my_model = MyModel.new(my_model_params)

  # currently creating derivatives & persisting all in one go
  @my_model.image_derivatives! if @my_model.image

  if @my_model.save
    render json: { success: "MyModel created successfully!" }
  else
    @errors = @my_model.errors.messages
    render 'errors', status: :unprocessable_entity
  end
end
Goal
Ideally I want to clear only the cached file(s) I currently have hold of in my create controller, as soon as they (the original file and the derivatives) have been persisted to permanent storage.
What is the best way to do this for scenario A (synchronous) and scenario B (asynchronous)?
What I have considered/tried
After reading through the docs I have noticed 3 possible ways of clearing cached images:
1. Run a rake task to clear cached images.
I really don't like this, as I believe the cached files should be cleaned up once the file has been persisted, not left to an admin task (cron job) that can't be tested with an image persistence spec.
# FileSystem storage
file_system = Shrine.storages[:cache]
file_system.clear! { |path| path.mtime < Time.now - 7*24*60*60 } # delete files older than 1 week
2. Run Shrine.storages[:cache] in an after block
Is this only for background jobs?
attacher.atomic_persist do |reloaded_attacher|
# run code after attachment change check but before persistence
end
3. Move the cache file to permanent storage
I don't think I can use this, as my direct upload occurs in two distinct parts: 1) immediately upload the attached file to the cache store, then 2) save it to the newly created record.
plugin :upload_options, cache: { move: true }, store: { move: true }
Are there better ways of clearing promoted images from cache for my needs?
Synchronous solution for the single-image upload case:
def create
  @my_model = MyModel.new(my_model_params)
  image_attacher = @my_model.image_attacher

  image_attacher.create_derivatives             # create the different sized images
  image_cache_id = image_attacher.file.id       # save the cache file id, as it will be lost in the next step
  image_attacher.record.save(validate: true)    # promote the original file to permanent storage

  # Only clear the cached image that was used to create the derivatives
  # (if other images are being processed and cached, we don't want to blow them away)
  Shrine.storages[:cache].delete(image_cache_id)
end
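For scenario B (asynchronous), a rough sketch built on Shrine's backgrounding plugin, assuming Sidekiq; PromoteJob is a hypothetical class name, and the cache-deletion step is my addition rather than something the plugin does for you:
# assumption: the backgrounding plugin is enabled, e.g. in an initializer
Shrine.plugin :backgrounding
Shrine::Attacher.promote_block do
  PromoteJob.perform_async(self.class.name, record.class.name, record.id, name, file_data)
end

class PromoteJob
  include Sidekiq::Job

  def perform(attacher_class, record_class, record_id, name, file_data)
    attacher_class = Object.const_get(attacher_class)
    record = Object.const_get(record_class).find(record_id)

    attacher = attacher_class.retrieve(model: record, name: name, file: file_data)
    cached_file = attacher.file        # still the cached upload at this point
    attacher.create_derivatives
    attacher.atomic_promote            # move original + derivatives to :store
    cached_file.delete                 # then clear the leftover cached copy
  rescue Shrine::AttachmentChanged, ActiveRecord::RecordNotFound
    # the attachment changed or the record was deleted; nothing to promote
  end
end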
I have a requirement where I need to store a user's local settings in some cache object. I tried to implement this using $cacheFactory, e.g.
var userCache = $cacheFactory('users');
However, when my code hits this line again, it gives me the following error:
Error: cacheId 'users' is already taken!
I am not sure how to check if this ID already exists, because I need to fetch settings from this cache object each time the component loads.
How to do this is actually covered on the documentation page:
The $cacheFactory() function is not a "get or create" call, it's only a create.
This is how you would check if the cache has already been created:
if (!$cacheFactory.get('users')) {
  var userCache = $cacheFactory('users');
}
which can be changed to
var userCache = $cacheFactory.get('users') || $cacheFactory('users');
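From there the cache is used like any $cacheFactory cache; a small sketch (the 'settings' key and its value are made up for illustration):
var userCache = $cacheFactory.get('users') || $cacheFactory('users');

// store and read back the user's local settings
userCache.put('settings', { theme: 'dark', pageSize: 25 });
var settings = userCache.get('settings'); // undefined if nothing has been stored yet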
I'm working on an application using Play and Slick. This app requires access to (at least) two databases, and this works fine when one is defined as the default and the other is named. E.g.,
db.default.driver = "com.mysql.jdbc.Driver"
db.default.url = "jdbc:mysql://localhost:3306/db1"
db.db2.driver = "com.mysql.jdbc.Driver"
db.db2.url = "jdbc:mysql://localhost:3306/db2"
I can then happily access each db as follows
DB.withSession { implicit session => ??? }
DB("db2").withSession { implicit session => ??? }
However, this doesn't really make sense as there is no reason DB1 should be the default. The DBs contain different types of data, neither is the default, both are important. What I would like is:
db.db1.driver = "com.mysql.jdbc.Driver"
db.db1.url = "jdbc:mysql://localhost:3306/db1"
db.db2.driver = "com.mysql.jdbc.Driver"
db.db2.url = "jdbc:mysql://localhost:3306/db2"
Play-Scala barfs at this: it needs a default db driver and URL, and it needs to be able to connect to it.
Does anyone know a way to change this behaviour, or to trick Play into thinking it has a default?
UPDATE
To be clear, I've grepped my code to ensure that I'm not using DB.withSession anywhere. That is, every time I create a session I use DB("db1").withSession or DB("db2").withSession. However, when I run my tests, I still get an exception:
Caused by: Configuration error: Configuration error[Slick error : jdbc driver not defined in application.conf for db.default.driver key]
Something somewhere is trying to load the default config.
"default" is just a name with some convenience functions (withSession and withTransaction without a name), so no, you do not need to have a default connection if it does not fit your project.
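A minimal sketch of what that looks like with only named databases, reusing the DB helper from the question (the bodies are placeholders):
// every access names its database explicitly, so nothing falls back to db.default.*
DB("db1").withSession { implicit session =>
  // queries against db1
}

DB("db2").withTransaction { implicit session =>
  // queries against db2
}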