SOLR 9 - Delete Schema from Schema Designer

I am trying to delete a schema from SOLR 9. The schemas were created as part of following the tutorials: https://solr.apache.org/guide/solr/latest/getting-started/tutorial-films.html
The schema designer doesn't seem to have an option to delete:
https://solr.apache.org/guide/solr/latest/indexing-guide/schema-designer.html
I have tried deleting the core by executing the command:
C:\solr-9.0.0>bin\solr.cmd delete -c <core name>
but the schema still remains.
Can anyone advise how to delete it?

I came across this question recently while looking for the answer myself. After some searching, I figured out the steps below to remove the config(s):
Install Elkozmon ZooNavigator (https://zoonavigator.elkozmon.com/en/latest/, a web UI for ZooKeeper). Since I have my ZooKeeper running locally, outside a Docker container, I had to start ZooNavigator with host network mode disabled:
docker run \
-d -p 9000:9000 \
--name zoonavigator \
--restart unless-stopped \
elkozmon/zoonavigator:latest
Access ZooNavigator at http://localhost:9000/connect and provide:
Connection String: host.docker.internal:2181. Since I have my setup on Windows with Docker v18.03+, I used the special DNS name host.docker.internal, which resolves to the internal IP address used by the host.
Leave "Auth Username" & "Auth Password" empty and click connect.
In the navigation pane on the left, navigate into the solr/configs location and you will see all the available configs, including the ones created through the Solr Schema Designer console. Click the 3 vertical dots next to a config for the option to delete it. Delete the ones you no longer need and you are done!
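If you prefer to skip the UI, the same cleanup can likely be done from the command line. A sketch, assuming the embedded ZooKeeper on localhost:9983 and a config named myConfig (both hypothetical; adjust to your setup):
REM remove the configset node from ZooKeeper directly
bin\solr.cmd zk rm -r /configs/myConfig -z localhost:9983
REM or delete it through Solr's Configsets API
curl "http://localhost:8983/solr/admin/configs?action=DELETE&name=myConfig"
The API call fails if the config is still in use by a collection, which is a useful safety check before deleting.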
Hope this helps!

Related

How to deploy SQL Server Express on Docker Desktop Kubernetes

I've been studying "Kubernetes Up and Running" by Hightower et al. (first edition), Chapter 13, where they discuss creating a Reliable MySQL Singleton (since I just discovered that there is a second edition, I guess I'll be buying it soon).
Using their MySQL reliable singleton example as a model, I've been looking for some sample YAML files to make a similar deployment with Microsoft SQL Server (Express) on Docker Desktop for Kubernetes.
Apparently I need YAML files to deploy
Persistent Volume
Volume claim (should this be NFS?)
SQL Server (Express edition) replica set (in spite of the fact that this is just a singleton).
I've tried this example, but I'm confused because it does not contain a persistent volume & claim, and it does not work. I get the error:
Error: unable to recognize "sqlserver.yml": no matches for kind "Deployment" in version "apps/v1beta1"
Can someone please point me to some sample YAML files that are not Azure specific that will work on Docker Desktop Kubernetes for Windows 10? After debugging my application, I'll want to deploy this to Azure (AKS).
Wed Jul 15 2020 Update
I left out the "-n namespace" for the helm install command (possibly because I'm using Helm v3 and you are using Helm v2?).
That install command still did not work. Then I did a
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
Now this command works:
helm install todo-app-database stable/mssql-linux
Progress!
When I do a "k get pods" I see that my todo-app-mssql-linux database is in the pending state. So I did a
kubectl get events
and I see
Warning FailedScheduling pod/todo-app-database-mssql-linux-8668d9b88c-lsh5l 0/1 nodes are available: 1 Insufficient memory.
I've been google searching for "Kubernetes insufficient memory" and can find no match.
I suspect this is a problem specific to "Docker Desktop Kubernetes".
When I look at the output for
helm -n ns-todolistdemo template todo-app-database stable/mssql-linux
I see the deployment is asking for 2Gi. (Interesting: when I use the template command, the "-n ns-todolistdemo" does not cause an error like it does with the install command).
So I do
kubectl describe deployment todo-app-database-mssql-linux >todo-app-database-mssql-linux.yaml
I edit the yaml file to change 2Gi to 1Gi.
kubectl apply -f todo-app-database-mssql-linux.yaml
I get this error:
error: error parsing todo-app-database-mssql-linux.yaml: error converting YAML to JSON: yaml: line 9: mapping values are not allowed in this context
Hmm... that did not work. I try delete:
kubectl delete deployment todo-app-database-mssql-linux
kubectl create -f todo-app-database-mssql-linux.yaml
I get this error:
error: error validating "todo-app-database-mssql-linux.yaml": error validating data: invalid object to validate; if you choose to ignore these errors, turn validation off with --validate=false
So I try apply:
kubectl apply -f todo-app-database-mssql-linux.yaml
Same error!
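A side note on the two errors above: kubectl describe emits human-readable status text, not a manifest, so the file saved from it was never valid YAML to begin with. A manifest that kubectl can re-apply comes from the -o yaml output instead, e.g.:
kubectl get deployment todo-app-database-mssql-linux -o yaml > todo-app-database-mssql-linux.yaml
# edit the memory request in that file, then:
kubectl apply -f todo-app-database-mssql-linux.yaml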
Shucks.... Is there a way to adjust the memory allocation for Docker Desktop?
Thank you
Siegfried
Short answer
https://github.com/helm/charts/blob/master/stable/mssql-linux/templates/pvc-master.yaml
Detailed Answer
Docker Desktop already comes with a default StorageClass.
This storage class is responsible for auto-provisioning a PV whenever you create a PVC.
If you have a YAML definition of a PVC (persistent volume claim), you just need to leave storageClassName unset, so it will use the default.
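For illustration, a minimal PVC sketch (the name and size are hypothetical) with storageClassName deliberately left unset so the default is picked up:
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  # no storageClassName here: the default (hostpath) provisions the PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF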
k get storageclass
NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   11d
This is fair enough, as the Docker-for-Desktop cluster is a one-node cluster: if your DB crashes and the cluster brings it up again, it will not move to another node, because you simply have a single node :)
Now, should you write the PVC YAML from scratch?
No, you don't need to, because Helm should be your best friend.
(I explain below why you should use Helm, which doesn't require a steep learning curve.)
Fortunately, the community provides a chart called stable/mssql-linux.
Let's run it together:
helm -n <your-namespace> install todo-app-database stable/mssql-linux
# helm -n <namespace> install <release-name> <chart-name-from-community>
If you want to check the YAML (namely the PVC) that Helm computed, you can run template instead of install:
helm -n <your-namespace> template todo-app-database stable/mssql-linux
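Regarding the 2Gi memory request mentioned in the question: rather than editing rendered manifests, the request can likely be lowered at install time with --set, assuming the chart exposes the conventional resources values (check with helm show values stable/mssql-linux first):
helm -n <your-namespace> install todo-app-database stable/mssql-linux --set resources.requests.memory=1Gi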
Why do I give you the answer with Helm?
Writing YAML from scratch means reinventing the wheel while others have already done the work.
The most efficient way is to reuse what the community has prepared for you.
However, you may ask: how can I reuse what others have done?
That's where Helm comes in.
Helm is your installer for any application on top of Kubernetes, regardless of how much YAML your app requires.
Install it now and hit the ground running: choco install kubernetes-helm

Plesk Git Deploy not running "additional deployment actions"

Plesk Obsidian offers Git deployment, and we are trying to configure it to work similarly to our previous configuration on cPanel (we recently upgraded from a shared account with cPanel to a VPS with Plesk, so that we can use Docker later on).
Here are the details on exactly how we access the GIT configuration on our Plesk panel:
On Plesk Obsidian (Web Pro Edition / Reseller) we access the Git configuration via:
--> DOMAINS (left panel menu)
--> locate desired domain and MANAGE IN CUSTOMER PANEL
--> open the accordion drop-down for the domain
--> Git (under DevTools)
--> (in Git, under DevTools) locate desired repo
--> REPOSITORY SETTINGS link.
The folder structure on the VPS is not optimal, so we attempted to use the MANUAL DEPLOYMENT radio button under the REPOSITORY SETTINGS link and configure some post-deployment actions, but nothing happens.
In the end, just to prove to ourselves that the manual deployment actions worked, we replaced everything we had tried with just this one line:
/usr/bin/touch ./work4me.pls
And then searched the file system to see if this file had been created anywhere. No joy here either (we could not find the file).
Does anyone have any suggestions/ideas on what else to try?
Has anyone used this feature successfully (i.e. is it probably a configuration problem on our VPS)?
If the above touch command had worked, where should we be looking for the work4me.pls file?
You can use
touch ~/work4me.pls
This will create the file in the home directory. I have also tested other commands, such as removing the contents of a directory and copying files into it:
rm -r ~/folder/*
cp -a ~/source-folder/. ~/folder/
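To make the actions easier to debug, here is a sketch of a home-anchored set of deployment actions; deploy.log, the repo path, and the httpdocs docroot are assumptions about a typical Plesk layout:
date >> ~/deploy.log                # proves the actions ran at all
/usr/bin/touch ~/work4me.pls        # the same sanity check, anchored to the home directory
cp -a ~/git/myrepo/. ~/httpdocs/    # hypothetical repo and docroot paths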
Hope this helps! 💪

Packaging DX Project and Getting Error "The Default Workflow User must be set before activating this workflow rule"

I am able to deploy to a scratch org using "features": ["DefaultWorkflowUser"] in my project-scratch-def.json, but I am not able to package it using sfdx force:package:version:create -p "MyAlias" -k MyPassword -w 10
The only error I am receiving is "The Default Workflow User must be set before activating this workflow rule", on 5 different workflows. I do not see anything that I can pull into the manifest to fix this. How do I overcome this so that I can package it?
I got it working. We need to add the config file to the package creation command, like below:
sfdx force:package:version:create --package <Package_name> -k <Key> -f .\config\project --wait 30
Include the following line in your project-scratch-def.json:
"features": ["DefaultWorkflowUser"]
This will set the default workflow user to the admin user in the scratch org while creating the package version. See this link.
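Putting the two pieces together, a minimal sketch; the org name, package alias, and key are hypothetical, and the file path follows the standard DX project layout:
config/project-scratch-def.json:
{
  "orgName": "MyCompany",
  "edition": "Developer",
  "features": ["DefaultWorkflowUser"]
}
sfdx force:package:version:create --package "MyAlias" -k MyPassword -f config/project-scratch-def.json --wait 10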

IBM Cloud Private-Community Edition - Waiting for cloudant database initialization

I tried the command below:
docker run --rm -t -e LICENSE=accept --net=host -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0 install
the response is
Waiting for cloudant initialization
I entered the command and received the logs shown in the image; no error is shown. Please suggest a solution.
From the message, the Cloudant database initialization issue may be caused by the Cloudant Docker image being pulled from Docker Hub during ICP installation. The Cloudant Docker image is big; you can run the command below to check whether the image is already present in your environment.
$ docker images | grep icp-datastore
If the Cloudant Docker image is ready in your environment and the ICP installation still has the Cloudant database initialization issue, you can try installing the latest ICP 2.1.0.3 Community Edition. As of 2.1.0.3, ICP removes the Cloudant database. The ICP 2.1.0.3 installation documentation:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0.3/installing/install_containers_CE.html
If you still want to investigate the Cloudant database initialization issue in your ICP 2.1.0.1 environment, you can:
First, ensure your ICP nodes meet the system and hardware requirements:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/supported_system_config/system_reqs.html
Share the ICP installation configuration; check the contents of the config.yaml and hosts files.
Check the system logs (in the /var/log/messages or /var/log/syslog file) to find the relevant errors.
Run the 'docker logs <container>' command to check the logs for errors.
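For that last step, a sketch of finding the right container; the grep pattern is a guess, so adjust it to whatever docker ps shows:
docker ps -a | grep -i datastore    # locate the cloudant/icp-datastore container
docker logs <container-id>          # then read its logs for errors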

Easy way to push postgres db to heroku in Win7? problems with db:pull and pg:transfer

Using Rails 3.2.2, finishing up my migration from sqlite to postgres 9.2.
Used the answer in this tutorial as a guide to install postgres, and got stuck on Step 11, where it asks to run heroku db:pull, which gives me:
Failed to connect to database: Sequel::AdapterNotFound -> LoadError: cannot load such file -- pg
I dug deeper and found that db:pull (the taps gem) is deprecated, and came across a few recommendations for pg:transfer. Installed pg:transfer, but I get the impression it may be *nix-only(?), as running heroku pg:transfer returns:
Heroku client internal error. No such file or directory - .env (Errno::ENOENT)
If I do pg:transfer with -f and -t it gives me:
'env' is not recognized as an internal or external command, operable program or batch file
which means env isn't on the PATH or doesn't exist as a command in Windows.
Any thoughts on above errors?
Resolved by using the pgbackups add-on, which was recommended as the replacement for taps in the Heroku docs. I used this guide and uploaded my dump to Dropbox for Heroku to pick it up.
Here's my exact list of steps and cmds:
1. Added pgbackups from the heroku.com add-ons to my instance.
2. heroku pgbackups:capture DATABASE (this just backs up your Heroku db)
3. pg_dump -h localhost -U <pg username> -Fc dbname > dbname.dump
4. Moved dbname.dump into a folder on my Dropbox
5. In Dropbox, right-click on dbname.dump => "Share link"
6. Cancel the sharing dialogue pop-up, right-click the "Download" button, Copy Link Address (Chrome)
7. heroku pgbackups:restore DATABASE <paste dropbox download link here>
Dropbox trickiness: don't use the file link provided by Dropbox, since it's an HTML redirect and will cause pgbackups:restore to fail, even though the extension ends in .dump.
Instead, navigate to your Dropbox page and right-click Copy Link Address on the Download button. That's the address you use in your pgbackups:restore command (it should be something like db.dump?token=<long random string>).
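One way to sanity-check the copied link before running the restore; this just confirms the server returns the dump itself rather than an HTML redirect page:
curl -sIL "<paste dropbox download link here>" | grep -i content-type
# expect a binary type such as application/octet-stream, not text/html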
A bit clunky, but got the job done. If you know a better way please let me know!
You need to make a .env file containing something like:
DATABASE_URL=postgres://localhost/myapp_development
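With that file in the app's root directory, the plugin should find its local target. A sketch of the sequence (the app path is hypothetical):
cd /path/to/myapp    # the directory the plugin is run from, where .env lives
echo DATABASE_URL=postgres://localhost/myapp_development > .env
heroku pg:transfer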
References:
https://github.com/ddollar/heroku-pg-transfer
https://devcenter.heroku.com/articles/config-vars#local-setup
