If I have my v1.war file deployed on WildFly 10 Final, is it possible to deploy my v2.war and have WildFly automatically switch to the newer one, without the need for a reboot?
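A hedged sketch of one way this is commonly handled (not part of the original post): WildFly's management CLI can replace the content of an existing deployment in place, so the server itself does not need a restart. The file path and the deployment name v1.war below are illustrative:
jboss-cli.bat --connect
deploy v2.war --name=v1.war --force
Here --force tells the CLI to replace the content of the already-registered deployment (kept under its existing name) with the new archive.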
My setup is two systems: one local and one remote. The local machine is running Windows 11 and the remote is running Windows 10.
So, I recently ran out of space on the remote system. After investigating, I found that C:\Users\<user>\AppData\Local\Temp was using over 80 GB of space!
After further investigation I found folder after folder with 'vscode-server-win32-x64' inside.
After doing some looking, it seems that whenever I connect via VS Code Remote - SSH, the remote system downloads and attempts to install the VS Code server, which seems to lead to every connection attempt downloading a new copy into that temp folder.
Has anyone seen this, or have any idea what's going on?
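Not an explanation of the root cause, but as a stopgap for reclaiming the space, something along these lines (PowerShell, run on the remote machine) should delete only the leftover temp folders that contain a stale server copy. It assumes each stale folder contains a vscode-server-win32-x64 subfolder exactly as described above:
Get-ChildItem $env:TEMP -Directory |
  Where-Object { Test-Path (Join-Path $_.FullName 'vscode-server-win32-x64') } |
  Remove-Item -Recurse -Force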
I have PostgreSQL 9.3 installed.
I would also like to have PostgreSQL 9.6.1 installed.
Each application is using a different DB. Most of the time I don't run both applications, so I don't need them to run concurrently.
I downloaded the installer recommended by PostgreSQL and installed 9.6.1, but then it seems that 9.3 is no longer able to start. I'm getting an error trying to run sudo service postgres start:
Starting PostgreSQL 9.3 database server
The PostgreSQL server failed to start. Please check the log output.
The log file (/var/log/postgresql/postgresql-9.3-main.log) is empty, though I'm not sure that's the interesting one.
Any idea how to be able to run both instances?
You need to check the postgresql.conf config file.
If you want to run both instances at the same time, they will need to listen on different ports, otherwise they will conflict. The default is 5432; change this for one of the instances.
Then make sure that the data directory and log file are unique for each instance.
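A minimal sketch of what that could look like, assuming the Debian/Ubuntu-style cluster layout suggested by the log path in the question (the second port and the paths are illustrative):
# /etc/postgresql/9.3/main/postgresql.conf
port = 5432
data_directory = '/var/lib/postgresql/9.3/main'
# /etc/postgresql/9.6/main/postgresql.conf
port = 5433
data_directory = '/var/lib/postgresql/9.6/main'
With that packaging, each cluster can then be started on its own, for example with sudo pg_ctlcluster 9.3 main start and sudo pg_ctlcluster 9.6 main start. If 9.6.1 was installed through the EDB installer instead, it keeps its own data directory (typically under /opt/PostgreSQL/9.6) and its own service, so only the port needs adjusting.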
I've recently upgraded my Xampp environment to v3.2.2 of the software. I have the same versions running on 2 different machines using the same configuration for both environments.
By the same configuration, I mean that I got the software working the way I wanted on one machine, shut down Xampp, zipped up the folder, and copied that folder over to the other machine where it started right up. I do not use the Xampp installer, I download the full zip file.
Here is a list of the revised configuration files for both machines (paths are relative to the Xampp folder):
apache/conf/httpd.conf
apache/conf/extra/httpd-xampp.conf
php/php.ini
php/pear.bat
php/pciconf.bat
php/bin/my.ini
phpMyAdmin/config.inc.php
setup_xampp.bat
The issue I'm having is that the same database, imported through the same version of phpMyAdmin, works on one machine and not the other.
I tried adding this line to the phpMyAdmin config file, to no avail:
$cfg['ExecTimeLimit'] = 0;
That was recommended in this thread:
Giving script timeout passed on database import
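For what it's worth, phpMyAdmin's ExecTimeLimit is not the only limit that can kill a large import; PHP's own settings in php/php.ini apply as well, and they are the kind of thing that can silently differ between two copied environments. The values below are only illustrative:
; php/php.ini - illustrative values to rule out PHP-side limits
max_execution_time = 600
max_input_time = 600
memory_limit = 512M
post_max_size = 128M
upload_max_filesize = 128M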
I'm running 2 versions of Xampp, and that is all working fine. Interestingly, the same database will import on the offending machine without timing out under Xampp v3.2.1. I use the same configurations for that version of Xampp too.
Could this have something to do with MariaDB? The older version of Xampp used MySQL.
I can work around the issue, for now, but I'd sure appreciate solving the mystery of why I can import on one machine and not the other.
Thanks, in advance, for any helpful replies.
I have an application which uses the OCI 7 API. This application is successfully deployed on a variety of configurations such as WS2003/Oracle 9 and WS2008 R2/Oracle 12 (R1). I am now trying to deploy the app on WS2012 R2, but I am facing a frustrating issue where the application crashes with an illegal access somewhere in oranls12.dll. This makes me think it has something to do with the locale and/or system variables. I have checked that the NLS_LANG system variable is set to the same value the database uses, which is AMERICAN_AMERICA.WE8MSWIN1252.
I have tried using the binary which I know works on WS2008, and I have also compiled it on WS2012. It still crashes. Does anyone know what is wrong, or have any pointers on how to debug this properly?
I will provide any details that are needed.
To answer the comments below, the app uses the OCI 7 API, which is still provided with the newer drivers. The app itself is compiled against OCI 12. The database running on the server is Oracle 12.1.0.1.
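One hedged pointer for debugging: on Windows, NLS_LANG can come either from the environment or from the Oracle home's registry key, and the two can disagree on a freshly set up server. The registry key name below (KEY_OraClient12Home1) is only an example; the actual name depends on how the 12c client was installed:
rem run in cmd on the WS2012 R2 box
echo %NLS_LANG%
reg query "HKLM\SOFTWARE\ORACLE\KEY_OraClient12Home1" /v NLS_LANG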
I feel I'm doing something wrong with the ArangoDB upgrade process. The end result of the upgrade is that my databases exist, my users exist, and my collections exist, but there are no documents in my collections. Obviously this is an issue. I've had this problem occur twice, upgrading from 2.3.1 -> 2.3.4 and from 2.3.4 -> 2.4 on Windows. I used the same procedure in both cases:
Stopped the ArangoDB service
Made a backup copy of my ArangoDB directory from Program Files
Installed the new version of ArangoDB
Copied the contents of the database folder from the old ArangoDB directory to the new one, excluding the system database (I feel like this is where I go wrong...)
Then I open a command prompt to the bin directory and run arangod --upgrade
The upgrade output seems right to me: it finds the old databases and upgrades them, as evidenced by the fact that they exist, along with the collections. But as stated before, the collections are all empty. Thankfully this has been in a dev environment, but I worry about upgrading my production environment. Am I doing something wrong, or is this a bug?
I've tried to reproduce this with the upgrade step from 2.3.5 to 2.4.1, using the x64 Arango packages.
What I did:
First, ran arangod from the shell with its own database directory outside of the program directory:
bin\arangod.exe c:\ee --console
Created a collection, inserted data (like the js/server/tests/aql-optimizer-rule-use-index-for-sort.js setUp()-function does)
then installed the new version, ran
bin\arangod.exe c:\ee --upgrade
then
bin\arangod.exe c:\ee --console
AQL_EXECUTE("for u in UnitTestsAqlOptimizeruse_index_for_sort_XX return u")
Which gave me all 100 documents which I put into the collection.
Next I tried running the arangod service, with the var\lib folder inside of the Program Files folder. I connected using arangosh, inserted the documents into the collection again, and verified with
db._query("for u in UnitTestsAqlOptimizeruse_index_for_sort_XX return u").toArray();
that all data was there.
Then I stopped the service, installed 2.4.1, stopped the newly installed service, and used Explorer to copy the data over into the ArangoDB 2.4.1\var\lib directory. I ran arangod --upgrade successfully, restarted the service, and used arangosh to successfully revalidate the collection and its documents again.
So, as this seems similar to what you did, can you try to reproduce this with a minimal set of data and send us your var\lib directory?
As it turns out, the problem was related to replication. I would replicate data from the production DB to use during development. Then, when I would upgrade or stop the Arango service on the dev DB, all the documents would vanish. BUT when I used arango backup and restore to copy the production DB data, everything worked as expected. The newest version of Arango is supposed to have fixed the issue, but I haven't had time to test it.
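For reference, a hedged sketch of the dump-and-restore route mentioned above, presumably done with the arangodump and arangorestore tools; the endpoint, database name, and dump directory are illustrative, and the target database is assumed to exist before the restore:
arangodump --server.endpoint tcp://127.0.0.1:8529 --server.database mydb --output-directory "dump"
arangorestore --server.endpoint tcp://127.0.0.1:8529 --server.database mydb --input-directory "dump"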