Starting SFDX: Deploy Source to Org - slow, really slow; in fact, it never finished

I was doing a deploy to Salesforce (my first) via right-click > "Deploy to Org" on a single file. There's no useful output to say what's going on:
14:35:11.774 Starting SFDX: Deploy Source to Org
I've read elsewhere that Salesforce can be exceptionally slow when it comes to deployment, but ten minutes (and counting) to deploy a single file seems very slow indeed. Is there a way to debug into what's happening, or is it just a black box?

Production or sandbox? "Normal" or source-tracked (a scratch org, for example)? Do you have the "Output" view at the bottom of your VS Code, where you should normally see command results (it's slightly different from "Terminal")?
If you log in to the target org and go to Setup -> Monitor Deployments, do you even see your attempt? If you deploy to prod, by default it'll run all tests, which in complex orgs can be expensive; an hour, for example.
You may get results faster if you run just the one test related to the class you're deploying, but for that you need to whip out some script-fu:
sfdx force:source:deploy -u mytargetuser -p "force-app/main/default/classes/AccountTriggerHandler.cls" -l RunSpecifiedTests -r "AccountTriggerHandlerTest" --verbose --loglevel fatal -c
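Two notes on the flags: -c (--checkonly) validates the deployment without saving anything to the org, so drop it when you want the components actually deployed; -l RunSpecifiedTests together with -r limits the test run to the named class instead of the org's full local test suite.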

Related

How to deploy SQL Server Express on Docker Desktop Kubernetes

I've been studying "Kubernetes Up and Running" by Hightower et al. (first edition), Chapter 13, where they discuss creating a reliable MySQL singleton. (Since I just discovered that there is a second edition, I guess I'll be buying it soon.)
Using their MySQL reliable singleton example as a model, I've been looking for some sample YAML files to make a similar deployment with Microsoft SQL Server (Express) on Docker Desktop for Kubernetes.
Apparently I need YAML files to deploy:
Persistent Volume
Volume claim (should this be NFS?)
SQL Server (Express edition) replica set (in spite of the fact that this is just a singleton).
I've tried this example, but I'm confused because it does not contain a persistent volume & claim, and it does not work. I get the error:
Error: unable to recognize "sqlserver.yml": no matches for kind "Deployment" in version "apps/v1beta1"
Can someone please point me to some sample YAML files that are not Azure specific that will work on Docker Desktop Kubernetes for Windows 10? After debugging my application, I'll want to deploy this to Azure (AKS).
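As an aside, that specific error usually means the manifest predates Kubernetes 1.16, which removed the old apps/v1beta1 group; on current clusters a Deployment is declared under apps/v1, so the first lines of the manifest would read:
apiVersion: apps/v1
kind: Deployment
with the rest unchanged.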
Wed Jul 15 2020 Update
I left out the "-n namespace" for the helm install command (possibly because I'm using Helm v3 and you are using Helm v2?).
That install command still did not work. Then I did a
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
Now this command works:
helm install todo-app-database stable/mssql-linux
Progress!
When I do a "k get pods" I see that my todo-app-mssql-linux database is in the pending state. So I did a
kubectl get events
and I see
Warning FailedScheduling pod/todo-app-database-mssql-linux-8668d9b88c-lsh5l 0/1 nodes are available: 1 Insufficient memory.
I've been google searching for "Kubernetes insufficient memory" and can find no match.
I suspect this is a problem specific to "Docker Desktop Kubernetes".
When I look at the output for
helm -n ns-todolistdemo template todo-app-database stable/mssql-linux
I see the deployment is asking for 2Gi. (Interesting: when I use the template command, the "-n ns-todolistdemo" does not cause an error like it does with the install command).
So I do
kubectl describe deployment todo-app-database-mssql-linux >todo-app-database-mssql-linux.yaml
I edit the yaml file to change 2Gi to 1Gi.
kubectl apply -f todo-app-database-mssql-linux.yaml
I get this error:
error: error parsing todo-app-database-mssql-linux.yaml: error converting YAML to JSON: yaml: line 9: mapping values are not allowed in this context
Hmm... that did not work. I try delete:
kubectl delete deployment todo-app-database-mssql-linux
kubectl create -f todo-app-database-mssql-linux.yaml
I get this error:
error: error validating "todo-app-database-mssql-linux.yaml": error validating data: invalid object to validate; if you choose to ignore these errors, turn validation off with --validate=false
So I try apply:
kubectl apply -f todo-app-database-mssql-linux.yaml
Same error!
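A likely cause of these parse and validation errors: kubectl describe emits a human-readable report, not a manifest, so the redirected file was never valid YAML. Exporting the live object with -o yaml would have produced a file that can be edited and re-applied:
kubectl get deployment todo-app-database-mssql-linux -o yaml > todo-app-database-mssql-linux.yaml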
Shucks.... Is there a way to adjust the memory allocation for Docker Desktop?
Thank you
Siegfried
Short answer
https://github.com/helm/charts/blob/master/stable/mssql-linux/templates/pvc-master.yaml
Detailed Answer
Docker Desktop already comes with a default StorageClass.
This storage class is responsible for auto-provisioning a PV whenever you create a PVC.
If you have a YAML definition of a PVC (persistent volume claim), just leave storageClassName empty, so the default will be used.
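For example, a minimal claim that relies on the default class might look like this (the name and size here are illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data             # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi             # modest size for a local single-node cluster
  # storageClassName is omitted on purpose, so the default class is used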
k get storageclass
NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   11d
This is fair enough, as the Docker Desktop cluster is a one-node cluster: if your DB crashes and the cluster brings it up again, it will not move to another node, simply because you have a single node :)
Now, should you write the PVC YAML from scratch?
No, you don't need to, because Helm should be your best friend.
(I explain below why you should use Helm; there is no steep learning curve.)
Fortunately, the community provides a chart called stable/mssql-linux.
Let's run it together:
helm -n <your-namespace> install todo-app-database stable/mssql-linux
# helm -n <namespace> install <release-name> <chart-name-from-community>
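If the stable repo is not configured yet, add it first. Note that the Google Storage URL shown in the question's update has since been retired in favor of charts.helm.sh:
helm repo add stable https://charts.helm.sh/stable
helm repo update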
If you want to check the YAML (namely the PVC) that Helm computed, you can run template instead of install:
helm -n <your-namespace> template todo-app-database stable/mssql-linux
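This also points at a cleaner fix for the "Insufficient memory" scheduling failure than hand-editing rendered YAML: most charts let you override resource requests through values at install time. Assuming this chart exposes them under a resources key (check with helm show values stable/mssql-linux):
helm -n <your-namespace> install todo-app-database stable/mssql-linux --set resources.requests.memory=1Gi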
Why did I give you the answer with Helm?
Writing YAML from scratch means reinventing a wheel that others have already built.
The most efficient way is to reuse what the community has prepared for you.
However, you may ask: how can I reuse what others have done?
That's where Helm comes in.
Helm is your installer for any application on top of Kubernetes, regardless of how much YAML your app requires.
Install it now and hit the ground running:
choco install kubernetes-helm

Deploying Haskell yesod docker container on google app engine

I am trying to upload a yesod Docker container on Google App Engine. The source code is here and the Docker image is here.
I followed the documentation in the Custom runtime quickstart. When invoking gcloud app deploy, the app builds fine after increasing the build timeout, but the container either fails the readiness check when trying to start or shows the following timeout message:
ERROR: (gcloud.app.deploy) Operation [apps/meeshkan-github-webhook-router/operations/xxxx-xxxx-xxxx] timed out. This operation may still be underway.
I have tried experimenting with several things, including a manual readiness check, creating an /_ah/health endpoint, and increasing the timeout of the readiness check all the way to 1799 seconds, but none of these actions seem to work.
One issue may be the size of the container (it is 3.2 GB), and I could try to prune it down, but I'd only do that if someone could confirm that container size is a contributing factor to deployment problems. Other than that, I'm not sure what could be causing this failure. The Docker image starts fine on our local machines.
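For reference, this kind of readiness-check tuning lives in app.yaml for the App Engine flexible environment, along these lines (field names per the flexible-environment docs):
readiness_check:
  path: "/_ah/health"
  app_start_timeout_sec: 1799   # how long App Engine waits for the app to become ready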
Thanks in advance for your help and suggestions!
The issue turned out to be a Windows-specific build quirk: Docker Desktop on Windows gave all shell scripts executable permission automatically, whereas on Linux the shell scripts need to be granted executable permission explicitly. By adding this line to my Dockerfile:
RUN chmod +x /usr/src/app/run.sh
Everything worked fine!
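An alternative, assuming run.sh is tracked in git, is to record the executable bit in the repository itself so every build picks it up, even from a Windows checkout:
git update-index --chmod=+x run.sh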

DataPusher Production Deployment on CKAN 2.8 - OperationalError: (sqlite3.OperationalError) attempt to write a readonly database

I followed the official doc for deploying DataPusher to a production environment (https://docs.ckan.org/projects/datapusher/en/latest/deployment.html) but am getting OperationalError: (sqlite3.OperationalError) attempt to write a readonly database [SQL: u'INSERT INTO jobs...
I recognize this error is more to do with SQLAlchemy; namely, the apache2 web-server user does not have permission to write to the SQLite database that the WSGI DataPusher app uses to keep track of jobs. I have limited experience with WSGI apps, so I'm not really sure where to start with debugging.
I followed the official documentation to a T; however, it's worth noting that I am replacing what was a functioning DataPusher development installation on the same server. I believe I have removed everything related to the development install.
Also worth noting is that /usr/lib/ckan/default points to /home/ubuntu/ckan/lib/default for some inexplicable reason. I also believe this was a source install rather than a package install (hence my needing to deploy DataPusher).
I have tried adapting the documentation for the ckan directory being in /home/ubuntu, but I don't think this should matter, since /usr/lib/ckan/default still points to the same effective location.
You'll need to chmod 777 (or the like) the sqlite (.db) file that is specified in the /etc/ckan/datapusher_settings.py file.
For example, the datapusher_settings.py file has the line for SQLALCHEMY_DATABASE_URI that points to /tmp/job_store.db by default.
So: sudo chmod 777 /tmp/job_store.db. I don't think it's much of a concern to 777 this particular file, but a less-open value will likely suffice (e.g. 655). It may be best to chown it to the Apache user, since that is the only thing that should be accessing this SQLite file, much less editing it.
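For example, assuming Apache runs as the www-data user (the Debian/Ubuntu default), a tighter setup might be:
sudo chown www-data:www-data /tmp/job_store.db
sudo chmod 664 /tmp/job_store.db   # owner and group can write; others read only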
There should be no issues from CKAN being installed in a non-standard directory (in this case, since /usr/lib/ckan/default still points to the CKAN directory).

Ansible Issue - [Errno 2] No such file or directory

I've been holding off posting here because I feel like this issue could be too vague. I will try my best to explain. I have been through all of the existing questions but they don't seem relevant to what I am doing.
Basically, I have inherited three EC2 instances that host the Dev / Staging / Live web applications in my new role. I use Ansible playbooks to migrate the database between all environments. We recently deployed a new website onto all three existing instances.
The Dev box recently died, so I blew it away and launched a new one. The website looks fine; however, exporting and importing the database no longer works (on the new instance).
Below is the Ansible output:
TASK: [Export database to migrate] ********************************************
failed: [172.**.**.***] => {"changed": true, "cmd": "wp db export dbv2.sql --tables=t*******0_links,t*******0_options,t*******0_postmeta,t*******0_posts,taxlt4ws0_rg_form,taxlt4ws0_rg_form_meta,taxlt4ws0_rg_form_view,t*******0_term_relationships,t*******0_term_taxonomy,t*******0_termmeta,t*******0_terms,t*******0_usermeta,t*******0_users", "delta": "0:00:00.001594", "end": "2017-09-01 10:21:25.225355", "rc": 127, "start": "2017-09-01 10:21:25.223761", "warnings": []}
stderr: /bin/sh: 1: wp: not found
FATAL: all hosts have already failed -- aborting
Things I've checked:
Chmod on the folders it imports/exports to/from
IAM role is set
Used the shell module instead of command in the playbook
Configs for each environment
I'm really stumped. My Ansible knowledge is quite limited, as I only picked it up a couple of months ago and hadn't run into any issues (even with a new website) until the Dev box had to be replaced.
I think Ansible is referring to WP-CLI; it is not able to find its executable.
If this is the case, you need to install it with another task before that one (a sketch follows below).
Basically, what this is complaining about is that the command in your "Export database to migrate" task cannot find a wp script or executable:
stderr: /bin/sh: 1: wp: not found
I would recommend running which wp (or maybe a find) on the staging or live instances to see what it is, then install/copy it over to the Dev instance.
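If WP-CLI does turn out to be missing, a minimal install task could look like this (a sketch: the phar URL is WP-CLI's published build, and the destination path is an assumption):
- name: Install WP-CLI
  get_url:
    url: https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
    dest: /usr/local/bin/wp
    mode: '0755'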
You can test this hypothesis by using a small test script:
#!/bin/sh
wp
Create this script (say, test.sh), give it executable permissions, and run it in all the environments to see where it fails.

Supplying build info as qx.core.Environment entries

I have my qooxdoo project built and deployed by a CI server. Upon build, the server generates build info (version, VCS revision, CI build number, timestamp) that I would like to be passed to my qooxdoo app as qx.core.Environment keys.
At the moment, I have CI server generate a build.json file which is packaged together with the application, loaded at startup and converted to environment keys (by application code). This costs us an extra XHR.
On the other hand, I know that environment entries can be supplied during the build, via config.json. Of course, our build system could preprocess config.json to fill in the environment entries, but I'm a bit skeptical about the idea of the CI server fiddling with config.json. Is there a better solution? Is it possible to make the generator script read environment entries from some auxiliary source?
I would write a #VERSION# tag into my script and, at the end of the build process, just search and replace this string in the compiled JS file:
perl -i -p -e 's/#VERSION#/0.3.0/g' build/script/hello.js
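In a CI job, the replacement value would come from the build environment rather than being hard-coded, hypothetically (the variable name depends on your CI server):
perl -i -p -e "s/#VERSION#/$BUILD_NUMBER/g" build/script/hello.js
perl -i -p -e "s/#REVISION#/$(git rev-parse --short HEAD)/g" build/script/hello.js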
