How to update loopback models with angularjs? - angularjs

I regenerated lb-services by running the command line tool:
lb-ng server/server.js client/js/services/lb-services.js
and it removed some functions from the lb-services.js file, such as logout.
What can I do?

If you modified the generated service and then ran the command again, your changes have been overwritten. If you did not save your changes elsewhere, they are unfortunately lost.
From the LoopBack docs:
Generate lb-services.js
To generate the Angular services for a LoopBack application, use the
AngularJS SDK lb-ng command-line tool. First, create the
client/js/services directory, if you don’t already have it (by using
the mkdir command, for example), then in the project root directory,
enter the lb-ng command as follows:
$ mkdir -p client/js/services
$ lb-ng server/server.js client/js/services/lb-services.js
This command creates client/js/services/lb-services.js.
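Since lb-ng unconditionally overwrites its output file, a simple precaution before regenerating is to back up any hand-edited copy first. A minimal sketch, using the paths from the question (the touch line only stands in for the previously generated file so the snippet is self-contained):

```shell
# Back up the hand-edited services file before lb-ng overwrites it
mkdir -p client/js/services
touch client/js/services/lb-services.js      # stands in for the generated file
cp client/js/services/lb-services.js client/js/services/lb-services.js.bak
```

With the backup in place, any custom functions can be diffed back in after running lb-ng again.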

Copy a docker ARG into an Angularjs config file

I have a simple AngularJS application that is built through a Jenkins pipeline and a Dockerfile. When the Jenkins job runs, the environment is set, and it builds for one of two environments: dev or integration. What I need is a way to get that variable into the Angular app.
The Dockerfile uses the environment to build different config settings, like:
ARG env
COPY build_config/${env} /opt/some/path...
I need to get that env value into one of the controllers. Is there a way to copy env into a controller? I attempted something like the following:
COPY ${env} path/to/angular/file/controller
I have searched and tried different methods but cannot find a solution to work for the Jenkins with Docker pipeline.
You can just use RUN to write a string to a file:
RUN echo "$env" > path/to/angular/file/controller
If you want to append to the file instead of overwriting it, use
RUN echo "$env" >> path/to/angular/file/controller
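If the goal is to make the value available to the Angular code rather than to clobber a controller file, one option is to have the build write a tiny AngularJS constant module instead. A hypothetical sketch: the module name `app.env`, the constant `BUILD_ENV`, and the output path are illustrative, and the shell variable stands in for the Dockerfile's `${env}` ARG (in a Dockerfile this would be a RUN instruction with ARG env in scope):

```shell
env=dev                      # stands in for the Docker build ARG ${env}
mkdir -p path/to/angular
# Emit an AngularJS module exposing the build environment as a constant
cat > path/to/angular/env.js <<EOF
angular.module('app.env', []).constant('BUILD_ENV', '${env}');
EOF
```

A controller can then inject BUILD_ENV directly instead of parsing a copied config file.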

How we can execute a Jupyter notebook python script automatically in Sagemaker?

I used terraform to create Sagemaker notebook instance and deploy Jupyter notebook python script to create and deploy a regression model.
I was able to run the script and create the model successfully via the AWS console manually. However, I could not find a way to get it executed automatically. I even tried executing the script via shell commands through the notebook instance's lifecycle configuration, but it did not work as expected. Any other ideas?
Figured this out. I passed the script below to the notebook instance as its lifecycle configuration.
#!/bin/sh
sudo -u ec2-user -i <<'EOF'
source activate python3
pip install runipy
nohup runipy <<path_to_the_jupyter_notebook>> &
source deactivate
EOF
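A variant of the same idea, written to a file so it can be syntax-checked, that swaps the third-party runipy for jupyter nbconvert (which ships with the Jupyter install in the conda environments). The notebook path is an example, not from the question:

```shell
# Write the lifecycle-configuration "on start" script to a local file
cat > on-start.sh <<'SCRIPT'
#!/bin/sh
sudo -u ec2-user -i <<'EOF'
source activate python3
# --execute runs all cells; output is written next to the input notebook
nohup jupyter nbconvert --to notebook --execute /home/ec2-user/SageMaker/model.ipynb &
source deactivate
EOF
SCRIPT
sh -n on-start.sh   # syntax check only; the script itself runs on the instance
```

The nohup/& pairing matters because lifecycle configuration scripts that run longer than five minutes cause the instance start to fail, so the notebook execution should be backgrounded.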

I'm new to command line. I get a lot of messages like 'command not found' and 'no such file or directory'

Trying to run gcloud init to initialize the Google App Engine SDK by typing ./google-cloud-sdk/bin/gcloud init, but it showed no such file or directory or command not found. Is something wrong with my PATH? My path is:
/Users/AnneLutz/Documents/google-cloud-sdk
If you are typing ./google-cloud-sdk/bin/gcloud init and you installed the Cloud SDK in /Users/AnneLutz/Documents/google-cloud-sdk, then your current directory needs to be /Users/AnneLutz/Documents for what you typed to work.
That said, you should add /Users/AnneLutz/Documents/google-cloud-sdk/bin to your path. To do this, assuming you are using bash, you can run
source /Users/AnneLutz/Documents/google-cloud-sdk/path.bash.inc
To make this happen every time you start your shell, add it to your shell profile. For example, you can add the above source command at the end of your ~/.bash_profile file.
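Concretely, the one-time setup could look like this (a sketch assuming the install path from the question; adjust SDK to your actual location):

```shell
SDK=/Users/AnneLutz/Documents/google-cloud-sdk   # adjust to your install location
# Append the SDK's PATH setup to the bash profile so new shells pick it up
echo "source $SDK/path.bash.inc" >> ~/.bash_profile
```

After this, open a new terminal (or run `source ~/.bash_profile`) and `gcloud` should be on the PATH.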
It looks like you used the option to download the SDK zip file and are then trying to configure your environment with that download option. If you aren't comfortable with setting environment variables, you might want to instead try installing using the "interactive" installer, which will automate the steps for making the commands always available on your system.
The directions are here, but for Mac OS users they are basically:
Enter the following at a command prompt:
curl https://sdk.cloud.google.com | bash
Restart your shell:
exec -l $SHELL
Run gcloud init to initialize the gcloud environment:
gcloud init
For many, this procedure is easier than getting everything configured manually.

how to override already existing workspaces in RTC using the scm or lscm command

My requirement is to connect to RTC and automatically check out the files from the stream to the repository workspace.
I am writing the following commands in a bat file:
lscm login -r https://rtc.usaa.com/ccm -u uname -P password -n nickname -c
scm create workspace (workspacename) -r nickname -s (streamname)
lscm load workspace name -r nickname -d directorypath(c:codebase/rtc)
lscm logout -r nickname
When I execute the above batch file for the first time, it creates the workspace and loads the project into the workspace path.
When I execute the batch file a second time, it creates a duplicate workspace with the same name and throws an exception while loading.
I want to overwrite the already existing workspace every time while loading, but I didn't find a command for that.
Can you please suggest any other way of doing this, or a command that solves my problem?
It is best to delete the existing local workspace sandbox before loading the new one. In my setup, we execute the following steps:
1. Delete the local sandbox (and, if it makes sense, delete the existing repository workspace too)
2. Create a new repository workspace
3. Load the new repository workspace into the local sandbox
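The three steps can be sketched as a batch-file equivalent (Unix shell syntax shown here; the workspace name, stream name, and sandbox path are examples, and the lscm commands need a real RTC server and login to actually run):

```shell
cat > reload.sh <<'EOF'
#!/bin/sh
# 1. delete the local sandbox (and optionally the old repository workspace)
rm -rf /c/codebase/rtc
# 2. create a fresh repository workspace from the stream
lscm create workspace myworkspace -r nickname -s mystream
# 3. load it into the sandbox
lscm load myworkspace -r nickname -d /c/codebase/rtc
EOF
sh -n reload.sh   # syntax check; requires an RTC connection to execute for real
```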
Either create a uniquely named workspace (perhaps by sticking a time stamp into the name?) and then delete it when you're done, or use the workspace's UUID from the creation step.
Instead of deleting the workspace and writing the files into it again, you can try accepting incoming changes before the load, and then use the "--force" option so that only the changed files are overwritten.
Accept using: scm accept --flow-components -r <> -u <> -p <> --target
Add --force at the end of the load command you are using.
This should work fine.

Sencha: How to generate all-classes file

Morning,
My production build seems to be missing getOrientation function.
It seems that sencha-touch-all.js is not being copied into the build folder.
After doing much forum reading, etc, I have discovered that I actually need to use Cmd to create an all-classes.js file.
According to http://docs.sencha.com/touch/2.2.0/#!/guide/building, the following command should do the job:
sencha create jsb -a index.html -p app.jsb3
When I run this command from within the root of my app (where index.html lives), I get the following error:
[ERR] Unknown command: "create"
I have tried using commands generate or build instead of create but they do not work either.
So, why does it not recognise that command?
When I run the command from within my SenchaSDKTools folder, but use the full path/to/app, it seems to accept the command but does not create a file.
I have sencha touch 2.2.1 and Cmd 3.1.2.342
In order to use the sencha create jsb command, you have to:
Install Sencha SDK Tools
Mac - http://cdn.sencha.io/sdk-tools/SenchaSDKTools-2.0.0-beta3-osx.app.zip
Windows - http://cdn.sencha.io/sdk-tools/SenchaSDKTools-2.0.0-beta3-windows.exe
Open terminal and change directory to where the Sencha SDK is installed.
cd /Applications/SenchaSDKTools-2.0.0-beta3/
Generate a JSB3 file by executing the following command:
./sencha create jsb -a index.html -p app.jsb3
where:
-a (required) The location of the HTML file containing the scripts you want to include
-p (required) The location where the output .jsb3 file should be created
This scans your index.html file for all framework and application files that are actually used by the app, and then creates a JSB file called app.jsb3.
SOLVED:
It was working fine from within the SenchaSDKTools folder; I just hadn't told it the right place to create the file.
