sonarqube mssql packet size - sql-server

I'm using SonarQube 6.1 with MS SQL Server. I tried running a scan on an app that seems to be generating lots and lots of issues, but at the end of the process it throws the error below:
INFO: CPD calculation finished
INFO: Analysis report generated in 8297ms, dir size=8 MB
INFO: Analysis reports compressed in 1314ms, zip size=2 MB
INFO: ----------------------------------------------------------------------
INFO: EXECUTION FAILURE
INFO: ----------------------------------------------------------------------
INFO: Total time: 1:37.158s
INFO: Final Memory: 50M/481M
ERROR: Error during SonarQube Scanner execution
org.sonarqube.ws.client.HttpException: Error 500 on
http://<server>/api/ce/submit?projectKey=mykey&projectName=pname :
{"errors":[{"msg":"Fail to insert data of CE task somerandomString"}]}
Throughout the logs I see some errors parsing some of my CSS files (which I've yet to look into), but I don't think that's related. Just mentioning it in case it does have something to do with this.
I tried changing my connection string to include Packet Size=65536, but because of the space I'm not sure if I'm setting it correctly.
I tried
sonar.jdbc.url="jdbc:sqlserver:...;Packet Size=65536"
That blows up and SonarQube does not start.
I also tried
sonar.jdbc.url=jdbc:sqlserver:...;"Packet Size=65536" and
sonar.jdbc.url=jdbc:sqlserver:...;Packet Size=65536
SonarQube starts, but I get the same error every time.
First of all, is my error really related to packet sizes? If yes, what's the right way of changing that for SQL Server? If not, what else could be going on here?
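In case it helps frame the question: from what I can tell, the Microsoft JDBC driver calls this setting packetSize (no space, camel case) and caps it at 32767, so my guess at the right form in sonar.properties would be something like the line below. The server name and database name are placeholders, and I have not confirmed this is what SonarQube expects, so treat it as a sketch:
# assumption: packetSize is the Microsoft JDBC driver property; maximum is 32767, -1 uses the server default
sonar.jdbc.url=jdbc:sqlserver://myserver;databaseName=sonar;packetSize=32767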
thanks,

Thanks, G. Ann.
The root of my problems was in the server logs!

Related

Topaz Workbench error message: 92003: An error occurred while writing data to the channel

While Topaz Workbench is starting, the error message "92003: An error occurred while writing data to the channel" shows up. That is causing one of my configured repositories not to be listed there. How can I resolve this?
What resolved my case, as a workaround, was to check the number of fa_commgr.exe processes in Task Manager (I am using Windows 10), kill all of them, and then start Topaz again. This time it started normally (without that error message) and my repository was listed again.
Task Manager and the fa_commgr.exe process instances (there were 4 instances; I don't know why... yet):
My repository was listed again:
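If it helps anyone script the workaround instead of clicking through Task Manager, the same cleanup should be possible from an elevated command prompt (process name taken from the screenshots above; this is just the command-line equivalent of what I did by hand):
REM forcefully end every running fa_commgr.exe instance, then start Topaz again
taskkill /F /IM fa_commgr.exe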

chilkat - problems - sql server - download and unzip in memory without any zip file created

I'm testing the Chilkat example code: (SQL Server) Download a Zip from a URL and OpenFromMemory. (No .zip file is created) / https://www.example-code.com/sql/zip_openFromMemory.asp . But I only get an error message telling me:
Failed to find end-of-central-directory-record.
Failed to get central dir locations.
Does anyone have a solution to this problem?
full ChilkatLog:
OpenFromMemory:
DllDate: Oct 28 2019
ChilkatVersion: 9.5.0.80
UnlockPrefix: XXXXXXXXXXX
Architecture: Little Endian; 64-bit
Language: ActiveX / x64
VerboseLogging: 0
Component successfully unlocked using purchased unlock code.
oemCodePage: 850
openFromMemData:
Failed to find end-of-central-directory-record.
Failed to get central dir locations.
--openFromMemData
Failed.
--OpenFromMemory
--ChilkatLog
I modified the example to avoid the problem. Also, it should perform better because it won't try to pass the actual binary data across COM boundaries.
Instead, it just uses the Chilkat BinData object. (We pass the reference to the BinData object instead of the data itself.)
Please go to https://www.example-code.com/sql/zip_openFromMemory.asp to see the changes. (Refresh the page if needed.)
Let me know if that helps.
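In outline, the change looks like the simplified sketch below: download into a BinData with the Http object, then open the Zip from that BinData. Method names here may differ slightly from the published example, and the URL is a placeholder, so the page above remains the authoritative version:
-- simplified sketch; see the example page for the full, tested script
DECLARE @hr int
DECLARE @success int

-- BinData holds the downloaded bytes; only the object reference crosses the COM boundary
DECLARE @http int
EXEC @hr = sp_OACreate 'Chilkat_9_5_0.Http', @http OUT
DECLARE @bd int
EXEC @hr = sp_OACreate 'Chilkat_9_5_0.BinData', @bd OUT
EXEC sp_OAMethod @http, 'QuickGetBd', @success OUT, 'https://example.com/some.zip', @bd

-- open the zip directly from the in-memory BinData; no .zip file is written to disk
DECLARE @zip int
EXEC @hr = sp_OACreate 'Chilkat_9_5_0.Zip', @zip OUT
EXEC sp_OAMethod @zip, 'OpenBd', @success OUT, @bd

EXEC sp_OADestroy @http
EXEC sp_OADestroy @bd
EXEC sp_OADestroy @zip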
Chilkat has responded to my support inquiry and made changes to the example script!
Thanks for the quick response!

sybase interactive sql initialization error (java path)

I'm getting the error below, which from the looks of it is a missing classpath.
Trying to force the classpath by setting it as below doesn't help:
set CLASSPATH=C:\Sybase\Shared\SAPJRE-8_1_008_64BIT\lib
This is a relatively fresh installation of Sybase IQ, and I'm trying to run the Interactive SQL from the program list.
C:\Sybase\IQ-16_1\Bin64>dbisql.com
Error occurred during initialization of VM
java/lang/NoClassDefFoundError: java/lang/Object
[Last 4000 events in the event buffer]
<thread> <time> <id> <description>
23548 0.00 0x00000001 Creating red and yellow zone [0x0000008890e00000,0x0000008890e04000]
Aborting ...
I'm curious if there's a way to debug it at a more verbose level, to see which class is not being found. It's very odd that a fresh installation would do that. Windows 10 environment, compatible with IQ 16.x.
I do not recognize that exact error, but I have had other JVM issues with Sybase Interactive SQL, and changing the JVM path in dbisql.ini fixed them (the ini file is located in the same folder as dbisql.com).
That type of error seems to be related to incorrect Java version assumptions: up to Java 8, classes in the JVM were packaged a certain way, and that changed in Java 9.
Alternatively, one can use another database client.
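One more quick check along those lines: run the bundled SAP JRE directly and confirm it starts on its own before pointing dbisql at it. The path below is derived from the CLASSPATH in the question and assumes the standard bin\java.exe layout, so adjust for your install:
REM verify the bundled JRE can run at all; if this also fails, the JRE install itself is broken
"C:\Sybase\Shared\SAPJRE-8_1_008_64BIT\bin\java.exe" -version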

deja vu: Why am I seeing "the application was unable to start correctly (0xc000007b)"

Platform: Windows 10 Pro 64-bit [Version 10.0.15063]; Xeon CPU E3-1220 v3, 3101 Mhz, 4 Core(s), no HT; 32 GB ECC memory; Visual Studio 2017 Community Edition.
I am working on NTPD, the Network Time Protocol Daemon, trying to make it more accurate.
My revised version of NTPD ran from 3/16/17 until yesterday, 8/20/17. I finally got my readTemps.exe C# program working yesterday; all it does is read the CPU and System temperatures every 10 seconds, and waits for NTPD to ask for them via UDP. NTPD starts readTemps.
However, there was a problem in that NTPD could not start readTemps as Administrator, which is needed, I believe, for readTemps to interface with WMI. So I put code in readTemps to detect whether it is running as Administrator and, if not, restart itself elevated.
In test mode, a driver program can start readTemps, access temperatures for a minute, and stop readTemps, but NTPD can no longer do anything!
At first it stopped in the very beginning when it was trying to request privileges to set the system time (which is not my code and has worked forever).
Now it cannot even load, failing with the error "(t)he application was unable to start correctly (0xc000007b)."
Using my backup program I replaced the newly compiled versions of ntpd.exe and ntpd.lib with the versions that had run successfully from 3/16--8/20/17, but still I get the (0xc000007b) error.
I downloaded Dependency Walker (DW) and ran it on the ntpd.exe image that worked for 5 months. DW found the following errors:
Error: At least one required implicit or forwarded dependency was not found.
Error: At least one module has an unresolved import due to a missing export function in an implicitly dependent module.
Error: Modules with different CPU types were found.
Error: A circular dependency was detected.
Warning: At least one delay-load dependency module was not found.
Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module.
In particular, DW finds that dozens of DLLs are missing, e.g.,
API-MS-WIN-CORE-REGISTRY-L1-1-1.DLL
API-MS-WIN-CORE-REGISTRY-L2-2-0.DLL
API-MS-WIN-CORE-RTLSUPPORT-L1-2-0.DLL
API-MS-WIN-CORE-SIDEBYSIDE-L1-1-0.DLL
API-MS-WIN-CORE-STRING-L2-1-0.DLL
etc., etc., etc.
I can't find these DLLs anywhere using the dir /S command: not on C:, the system disk, and not on K:, where Visual Studio 2017 Community Edition is installed.
Does anyone have any idea what could possibly be wrong?
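One check that might narrow it down, given the "Modules with different CPU types were found" error above: confirm the bitness of ntpd.exe and every DLL it loads. From a Developer Command Prompt for VS 2017 (dumpbin ships with Visual Studio), something like:
REM print the target machine of the image; it and all its DLLs should report the same architecture (x64)
dumpbin /headers ntpd.exe | findstr machine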

Gatling vs httperf

I am trying to stress test a simple server with 10K connections per second; it's a pretty dummy server, so this should be possible.
When I run Gatling, the best I can get is 7K; at 8K we start to receive connection errors. This is again a simple test, ramping to 8K and holding the traffic for 2 minutes.
Request 'Home' failed: java.net.ConnectException: Cannot assign requested address
I know this error is related to tuning our box (open files, etc.). I have tried some commands, but that didn't help much.
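For context, the kind of tuning I mean is along these lines (illustrative values only; "Cannot assign requested address" usually points at running out of ephemeral ports or TIME_WAIT sockets):
# widen the ephemeral port range and allow reuse of TIME_WAIT sockets (illustrative values)
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
sysctl -w net.ipv4.tcp_tw_reuse=1
# raise the open-file limit for the shell that launches Gatling
ulimit -n 65536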
Anyway, when I run a simple burst test with httperf, I easily get 10K without any errors. Command line:
httperf --uri / --server cloud-10-0-20-35 --port 8080 --num-conns=500000 --rate 10000
I'm on a CentOS 6.x VM box.
Why is httperf behaving differently? I know it's a native tool, but why such a big difference? Any ideas? I am aware that this is more related to the Java infrastructure than to Gatling itself, which is an awesome tool.
