I understand that the dispatch.xml that resides in the default service's WEB-INF/ is the one that App Engine pays attention to. However, when I do an appengine:update (Java, mvn), the routing rules don't seem to update. I actually have to do a separate appengine:update_dispatch to effect the changes. Do I misunderstand something, or am I doing something incorrectly? Thanks.
I'd say it's an understanding problem: you appear to be expecting a single operation, but it isn't one.
Updating the default service's app code and updating the dispatch rules (an app-level config affecting all of the app's services) are distinct, independently executable operations, and they are mapped as such in mvn.
From Uploading the dispatch file:
To upload the dispatch file, use the appcfg update_dispatch command, and specify the war directory for the default service. Be sure that all the services mentioned in the file have already been uploaded before using this command.
# cd to the war directory containing the default service
appcfg.sh update_dispatch .
You can also upload the dispatch file at the same time you upload one or more services, by adding the optional auto_update_dispatch flag, which can be used in two forms:
appcfg.sh --auto_update_dispatch update <app-directory>|<files...>
appcfg.sh -D update <app-directory>|<files...>
I guess it would be possible to create a single mvn mapping as well, using that --auto_update_dispatch flag under the hood, but IMHO it would be even more confusing and you'd still need to remember two separate commands (I wouldn't want to risk affecting other running services with a dispatch update while I'm uploading one particular service).
Using JMX (Java console), I am trying to restart a route with a file component consumer endpoint.
from("file:<some dir>?noop=true")
I am using the wire tap pattern to record the intermediate data transformations through other file endpoints.
On the first start of the Camel application, everything is fine, and all the files already present in the input directory are polled and processed.
But when I try to restart the route via JMX, nothing happens.
I tried manually removing the .camel directory (created, I guess, by the default FileIdempotentRepository) before restarting the route, in vain.
I also tried changing the kind of IdempotentRepository to a MemoryIdempotentRepository:
from("file:<somedir>?noop=true").idempotentConsumer(header("CamelFileName"), MemoryIdempotentRepository.memoryIdempotentRepository(1000))
Even if I trigger the clear() operation of this MemoryIdempotentRepository in the Java console before restarting the route, nothing is polled from the input directory after the restart.
If I add a new file, it works. Everything behaves as if there were a persistent history of the files that have already been polled once.
I wonder whether the use of the option "noop=true" creates an unmanaged idempotent repository that I cannot control with JMX.
If true, the file is not moved or deleted in any way. This option is good for readonly data, or for ETL type requirements. If noop=true, Camel will set idempotent=true as well, to avoid consuming the same files over and over again.
Any idea? (I am using camel-core 2.21.)
I found the solution to my issue.
I was misusing idempotentConsumer; I needed to configure the idempotent repository in the endpoint URI parameter list instead.
First, create an entry in a bean registry:
registry.put("myIdempotentRepository", MemoryIdempotentRepository.memoryIdempotentRepository(1000));
Then, refer to this idempotentRepository in the endpoint:
from("file:<somedire>noop=true&initialDelay=10&delay=1000&idempotentRepository=#myIdempotentRepository")
By doing this, GenericFileEndpoint:
will not create a default idempotentRepository
will add the idempotentRepository given in the endpoint options to the services of the Camel context, which means it can be managed via JMX
I think it would be useful to be allowed to manage the default idempotent repository in the FileEndpoint class of camel-core.
As you know, we have a file for GitLab CI configuration named '.gitlab-ci.yml',
and this file shouldn't be edited by developers, so I decided to prevent developers from editing it.
The thing is, GitLab says you can lock a file against editing, but the prerequisite for this is a Premium account.
What can I do if I don't have a Premium account?
Do you have any idea how to lock a file against editing?
Check if you have access to the Push Rules feature, which is a kind of pre-receive hook.
Or you can set up a pre-receive hook yourself if your GitLab server is on-premises.
In both cases, you can list the files being pushed in that hook and fail the push if one of them is .gitlab-ci.yml, as in the sketch below.
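For illustration, here is a minimal server-side pre-receive hook sketch in Python; only the .gitlab-ci.yml file name and the general idea come from the answer above, everything else (the script layout, skipping ref creation/deletion, the error message) is an assumption to adapt to your own server:

#!/usr/bin/env python3
# Minimal pre-receive hook sketch: reject pushes that modify .gitlab-ci.yml.
# Newly created or deleted refs are skipped here to keep the example short.
import subprocess
import sys

PROTECTED = ".gitlab-ci.yml"
ZERO = "0" * 40  # placeholder SHA git uses for ref creation/deletion

for line in sys.stdin:
    old, new, ref = line.split()
    if old == ZERO or new == ZERO:
        continue
    changed = subprocess.check_output(
        ["git", "diff", "--name-only", old, new], text=True
    ).splitlines()
    if PROTECTED in changed:
        sys.stderr.write("Push to %s rejected: %s may not be modified.\n" % (ref, PROTECTED))
        sys.exit(1)

The hook has to live on the server side, which is why this approach only applies to self-managed GitLab instances.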
As of today, the official way (a workaround, really) for this seems to be creating a different repository for the .yml file with more restrictive permissions, and then referencing that .yml file from your project:
A .gitlab-ci.yml may contain rules to deploy an application to the production server. This deployment usually runs automatically after pushing a merge request. To prevent developers from changing the .gitlab-ci.yml, you can define it in a different repository. The configuration can reference a file in another project with a completely different set of permissions (similar to separating a project for deployments). In this scenario, the .gitlab-ci.yml is publicly accessible, but can only be edited by users with appropriate permissions in the other project.
https://docs.gitlab.com/ee/ci/environments/deployment_safety.html#protect-gitlab-ciyml-from-change
Also, there is a discussion on this matter here:
https://gitlab.com/gitlab-org/gitlab/-/issues/15632
I tried to create a file on Heroku using PHP code:
$fh = fopen("../DownloadFiles/".$filename,'a');
fwrite($fh,$s);
but the file was not created and no error is shown. Please help.
This should work just fine, but are you aware that if you're running multiple dynos, that file will exist only on the dyno that served that one request, and not on all the others?
Also, Dynos restart every 24 hours, and their state is reset every time you push a change to Heroku, so you cannot store persistent information on them; that's called an ephemeral filesystem.
You could, for instance, store uploaded files on Amazon S3, like described in the docs: https://devcenter.heroku.com/articles/s3-upload-php
Two remarks about your original issue:
you're probably running an old version of CakePHP which mangles all internal PHP error handling and writes it out to a log file (if you're lucky), so you can't see anything in heroku logs, and it's not possible to configure it otherwise; upgrade to a more recent version that lets you log to streams and then use php://stderr as the destination
in general, if you want to write to a file in PHP, you can just do file_put_contents($filename, $contents)...
Does the DownloadFiles folder exist in the deployment? Node's fs gives an error if the directory is not found. You can add a snippet to check whether the dir exists and, if not, create it. You can use fs.exists and fs.mkdir.
For more info: http://nodejs.org/api/fs.html
What I'm actually trying to implement is remote access to the App Engine datastore using remote_api_shell.py. The problem I'm facing is that I'm able to log in but can't access the entity modules in my app. The steps of the procedure aren't clearly documented anywhere, so I'm not able to proceed further.
I referred to the articles
https://developers.google.com/appengine/docs/python/tools/remoteapi and
https://developers.google.com/appengine/articles/remote_api
They use a command like
python $GAE_SDK_ROOT/remote_api_shell.py -s your_app_id.appspot.com
I don't know where to type it. I used the command prompt, for which I modified the above as
c:\program files(x86)\google\google_appengine\python remote_api_shell.py -s your_app_id.appspot.com
It logs in. I'm able to save some entities in my datastore but unable to access my modules. I think there is some kind of directory I need to specify, or steps I have to follow beforehand which I might have missed. So I'm looking forward to some help to achieve this successfully.
Thanks.
First off, cd to your application directory.
Then run the remote shell as per the docs:
python $GAE_SDK_ROOT/remote_api_shell.py -s your_app_id.appspot.com
If you use appengine_config.py to set up all your paths, import that into the shell manually.
Otherwise you should be able to import any modules, etc., that are defined at the root level of your application directory.
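For example, once the shell prompt is up (started from the app directory), something like this should work; models, Book, and the title property are hypothetical names standing in for your own module and ndb model:

# Typed at the remote_api shell prompt, started from the app directory.
import appengine_config   # only if your app actually has one
from models import Book   # hypothetical module and kind

# The shell installs a remote datastore stub, so these calls hit the
# live application's datastore rather than a local one.
print(Book.query().count())
Book(title='Example').put()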
From the tutorial, which I confirmed by creating a simple project, the index.yaml file is auto-generated when a query is run. What I further observe is that, until then, the admin console (http://localhost:8080/_ah/admin/datastore) does not show the datastore.
My problem is this: I have a project for which data/entities are to be added manually through the datastore admin console. The website is only used to display/retrieve data, not to add data to the datastore.
How do I get my data-store to appear on the console so I can add data?
Yes, I did try retrieving from the empty datastore through the browser, just so I could get the index.yaml to populate, etc. But that does not work.
The easiest way is probably just to create a small Python script inside your project folder and create your entities in that script. Assign it to a URL handler that you'll use once, then disable (a minimal sketch follows this answer).
You can even do it from the python shell. It's very useful for debugging, but you'll need to set it up once.
http://alex.cloudware.it/2012/02/your-app-engine-app-in-python-shell.html
In order to do the same on production, use the remote_api:
https://developers.google.com/appengine/articles/remote_api
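Here is a minimal sketch of such a one-off handler, assuming the Python 2.7 runtime with webapp2 and ndb; the Book model, its title property, and the /seed route are made-up names you would replace with your own:

from google.appengine.ext import ndb
import webapp2

class Book(ndb.Model):
    # Hypothetical model; use your real entity classes instead.
    title = ndb.StringProperty()

class SeedHandler(webapp2.RequestHandler):
    def get(self):
        # Saving one instance is enough for the kind to appear in the
        # datastore viewer.
        Book(title='First book').put()
        self.response.write('seeded')

app = webapp2.WSGIApplication([('/seed', SeedHandler)], debug=True)

Request /seed once (after wiring the script into app.yaml), then remove the route so it can't be hit again.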
This is a very strange question.
The automatic creation of index.yaml only happens locally, and is simply to help you create that file and upload it to AppEngine. There is no automatic creation or update of that file once it's on the server: and as the documentation explains, no queries can be run unless the relevant index already exists in index.yaml.
Since you need indexes to run queries, you must create that file locally - either manually, or by running the relevant queries against your development datastore - then upload it along with your app.
However, this has nothing at all to do with whether the datastore viewer appears in the admin. Online, it will always show, but only entity kinds that actually have an instance in the store will be shown. The datastore viewer knows nothing about your models, it only knows about kinds that exist in the datastore.
On your development server you can use the interactive console to create/instantiate/save an entity, which should cause the entity class to appear in the datastore interface, like so:
from google.appengine.ext import ndb

class YourEntityModel(ndb.Model):
    pass

YourEntityModel().put()