xdebug can't collect trace output

I'm using Xdebug to analyze my website's performance.
Here is my php.ini configuration:
zend_extension = "C:\xampp\php\ext\php_xdebug.dll";
xdebug.remote_enable = true;
xdebug.remote_host = 127.0.0.1;
xdebug.remote_port = 9000 ;
xdebug.profiler_enable = on;
xdebug.trace_output_dir = "C:\xdebug\trace";
xdebug.profiler_output_dir = "C:\xdebug\profiler";
xdebug.auto_trace = On;
xdebug.show_exception_trace = On;
xdebug.remote_autostart = On;
xdebug.collect_vars = On;
xdebug.collect_return = On;
xdebug.collect_params = On;
xdebug.show_local_vars = On;
xdebug.default_enable = On;
xdebug.remote_handler = dbgp;
xdebug.max_nesting_level = 10000;
With this setup I can connect PhpStorm 9.0 to debug my PHP application.
But when I try to collect trace data into a local file, Apache returns an HTTP 502 error to the browser, and I don't know why this happens.

Related

Azure SQL Databricks Integration

These are my JDBC connection details:
jdbcHostname = "ss-owaisde.database.windows.net"
jdbcPort = 1433
jdbcDatabase = "database-owaisde"
jdbcUsername = "owaisde"
jdbcPassword = "******"
jdbcDriver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
jdbcUrl = f"jdbc:sqlserver://{jdbcHostname}:{jdbcPort};databaseName={jdbcDatabase};user{jdbcUsername};password={jdbcPassword};driver={jdbcDriver}"
Executing the Spark read:
df1 = spark.read.format("jdbc").option("url", jdbcUrl).option("dbtable", "SalesLT.Product").load()
Getting the following error on Databricks:
java.sql.SQLException: No suitable driver
I tried to replicate your issue with your code and got the same error.
As far as I can tell, the URL is not built in the correct format (note the missing "=" after "user" in the f-string). I tried the below code:
jdbcHostname = "<servername>.database.windows.net"
jdbcPort = 1433
jdbcDatabase = "<dbname>"
jdbcUsername = "<username>"
jdbcPassword = "<password>"
jdbcDriver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
#url = s"jdbc:sqlserver://${database_host}:${database_port}/${database_name}"
table = "Student"
jdbcUrl = f"jdbc:sqlserver://{jdbcHostname}:{jdbcPort};databaseName={jdbcDatabase}"
df1 = spark.read.format("jdbc").option("driver", jdbcDriver).option("url", jdbcUrl).option("dbtable", table).option("user", jdbcUsername).option("password", jdbcPassword).load()
The DataFrame was created successfully.
It worked fine for me; please check from your end.
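Equivalently, and only as a sketch (this variant is not from the original answer), the same settings can be passed in a single call using the names defined above:
import pyspark  # assumes a running Databricks/Spark session bound to `spark`

# Equivalent form: pass url, table, credentials, and driver as one options dict
connection_options = {
    "url": jdbcUrl,
    "dbtable": table,
    "user": jdbcUsername,
    "password": jdbcPassword,
    "driver": jdbcDriver,
}
df1 = spark.read.format("jdbc").options(**connection_options).load()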

Import data from MS SQL Server to HBase with Flume

I'm really new to Flume. I prefer Flume to Sqoop because, in my case, data is continuously imported into MS SQL Server, so I think Flume is the better choice, as it can transfer data in real time.
I just followed some online examples and then edited my own Flume config file, which describes the source, channel, and sink. However, Flume didn't seem to work: no data was transferred to HBase.
mssql-hbase.conf
# source, channel, sink
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sk1
# declare source type
agent1.sources.src1.type = org.keedio.flume.source.SQLSource
agent1.sources.src1.hibernate.connection.url = jdbc:sqlserver://xx.xx.xx.xx:1433;DatabaseName=xxxx
agent1.sources.src1.hibernate.connection.user = xxxx
agent1.sources.src1.hibernate.connection.password = xxxx
agent1.sources.src1.table = xxxx
agent1.sources.src1.hibernate.connection.autocommit = true
# declare SQL Server hibernate dialect
agent1.sources.src1.hibernate.dialect = org.hibernate.dialect.SQLServerDialect
agent1.sources.src1.hibernate.connection.driver_class = com.microsoft.sqlserver.jdbc.SQLServerDriver
#agent1.sources.src1.hibernate.provider_class=org.hibernate.connection.C3P0ConnectionProvider
#agent1.sources.src1.columns.to.select = *
#agent1.sources.src1.incremental.column.name = PK, name, machine, time
#agent1.sources.src1.start.from=0
#agent1.sources.src1.incremental.value = 0
# query time interval
agent1.sources.src1.run.query.delay = 5000
# declare the folder location where flume state is saved
agent1.sources.src1.status.file.path = /home/user/flume-source-state
agent1.sources.src1.status.file.name = src1.status
agent1.sources.src1.batch.size = 1000
agent1.sources.src1.max.rows = 1000
agent1.sources.src1.delimiter.entry = |
# set the channel to memory mode
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000
agent1.channels.ch1.transactionCapacity = 10000
agent1.channels.ch1.byteCapacityBufferPercentage = 20
agent1.channels.ch1.byteCapacity = 800000
# declare sink type
agent1.sinks.sk1.type = org.apache.flume.sink.hbase.HBaseSink
agent1.sinks.sk1.table = yyyy
agent1.sinks.sk1.columnFamily = yyyy
agent1.sinks.sk1.hdfs.batchSize = 100
agent1.sinks.sk1.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
agent1.sinks.sk1.serializer.regex = ^\"(.*?)\",\"(.*?)\",\"(.*?)\"$
agent1.sinks.sk1.serializer.colNames = PK, name, machine, time
# bind source, channel, sink
agent1.sources.src1.channels = ch1
agent1.sinks.sk1.channel = ch1
However, I use a similar config file to transfer data from MySQL to HBase, and luckily that one works.
mysql-hbase.conf
# source, channel, sink
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sk1
# declare source type
agent1.sources.src1.type = org.keedio.flume.source.SQLSource
agent1.sources.src1.hibernate.connection.url = jdbc:mysql://xxxx:3306/userdb
agent1.sources.src1.hibernate.connection.user = xxxx
agent1.sources.src1.hibernate.connection.password = xxxx
agent1.sources.src1.table = xxxx
agent1.sources.src1.hibernate.connection.autocommit = true
# declare mysql hibernate dialect
agent1.sources.src1.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
agent1.sources.src1.hibernate.connection.driver_class = com.mysql.jdbc.Driver
#agent1.sources.src1.hibernate.provider_class=org.hibernate.connection.C3P0ConnectionProvider
#agent1.sources.src1.columns.to.select = *
#agent1.sources.src1.incremental.column.name = id
#agent1.sources.src1.incremental.value = 0
# query time interval
agent1.sources.src1.run.query.delay = 5000
# declare the folder location where flume state is saved
agent1.sources.src1.status.file.path = /home/user/flume-source-state
agent1.sources.src1.status.file.name = src1.status
#agent1.sources.src1.interceptors=i1
#agent1.sources.src1.interceptors.i1.type=search_replace
#agent1.sources.src1.interceptors.i1.searchPattern="
#agent1.sources.src1.interceptors.i1.replaceString=,
# Set the channel to memory mode
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000
agent1.channels.ch1.transactionCapacity = 10000
agent1.channels.ch1.byteCapacityBufferPercentage = 20
agent1.channels.ch1.byteCapacity = 800000
# declare sink type
agent1.sinks.sk1.type = org.apache.flume.sink.hbase.HBaseSink
agent1.sinks.sk1.table = user_test_2
agent1.sinks.sk1.columnFamily = user_hobby
agent1.sinks.sk1.hdfs.batchSize = 100
agent1.sinks.sk1.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
agent1.sinks.sk1.serializer.regex = ^\"(.*?)\",\"(.*?)\",\"(.*?)\",\"(.*?)\"$
agent1.sinks.sk1.serializer.colNames = id,name,age,hobby
# bind source, channel, sink
agent1.sources.src1.channels = ch1
agent1.sinks.sk1.channel = ch1
Does anyone know if there is something wrong in the config file? Thanks.
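One concrete difference between the two configs stands out when they are compared side by side: in the MS SQL config the serializer regex has three capture groups while serializer.colNames lists four columns (the working MySQL config has four of each), and delimiter.entry = | makes the source emit pipe-separated fields that the comma-based regex can never match. A quick Python sanity check of that regex (the sample row is hypothetical; exact quoting depends on the source):
import re

# Serializer regex from the MS SQL config above: only three capture groups
pattern = re.compile(r'^"(.*?)","(.*?)","(.*?)"$')
col_names = ["PK", "name", "machine", "time"]
print(pattern.groups, "capture groups vs", len(col_names), "colNames")  # 3 vs 4

# Hypothetical row as emitted with delimiter.entry = |
sample = '"1"|"name-a"|"machine-a"|"2016-05-01 12:00:00"'
print(pattern.match(sample))  # None: the comma-based regex does not match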

How can I get the logs of past months in a Postgres DB?

Issue:
Someone has added a junk column to one of my tables. I want to figure out from the logs when and from where this was done.
Please help with this issue.
Make sure logging is enabled in postgresql.conf:
1. log_destination = 'stderr'  # or 'stderr,csvlog,syslog'
2. logging_collector = on  # needs a restart
3. log_directory = 'pg_log'
4. log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
5. log_rotation_age = 1d
6. log_rotation_size = 10MB
7. log_min_error_statement = error
8. log_min_duration_statement = 5000  # -1 = disabled; 0 = all; 5000 = 5 sec
9. log_line_prefix = '|%m|%r|%d|%u|%e|'
10. log_statement = 'ddl'  # 'none' | 'ddl' | 'mod' | 'all'
# prefer 'ddl': the log then contains DDL plus statements over the minimum duration
If you haven't enabled logging, make sure you enable it now.
If you don't have logs, the last resort is to run pg_xlogdump on your xlog (WAL) files under pg_xlog and look for the DDL.
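Once log_statement = 'ddl' has been capturing activity, the offending ALTER TABLE can be located with a simple scan of the log directory; a minimal sketch, assuming a default data-directory layout (the path below is hypothetical):
import glob
import re

# Hypothetical pg_log location; adjust to your actual data directory
log_glob = "/var/lib/pgsql/data/pg_log/postgresql-*.log"

for path in sorted(glob.glob(log_glob)):
    with open(path, errors="replace") as f:
        for line in f:
            # With log_statement = 'ddl', an added column appears as ALTER TABLE
            if re.search(r"ALTER\s+TABLE", line, re.IGNORECASE):
                print(path, line.rstrip(), sep=": ")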

Breakpoints are not hit in PhpStorm + Xdebug

OS: Windows 7
Web server: XAMPP 1.8.2 (PHP version: 5.4.27)
PhpStorm: 6.0.3
In php.ini:
[XDebug]
zend_extension = "D:\xampp\php\ext\php_xdebug-2.2.4-5.4-vc9.dll"
;xdebug.default_enable=1
;xdebug.auto_trace=1
;xdebug.show_exception_trace = 1
;xdebug.collect_vars = 1
;xdebug.collect_params=1
;xdebug.collect_return=1
;xdebug.profiler_append = 1
;xdebug.profiler_enable = 1
;xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = "D:\xampp\tmp"
xdebug.profiler_output_name = "cachegrind.out.%t-%s"
xdebug.remote_enable = 0
;xdebug.remote_autostart = off
xdebug.remote_handler = "dbgp"
xdebug.remote_host = "localhost"
xdebug.trace_output_dir = "D:\xampp\tmp"
xdebug.remote_mode = "req"
xdebug.remote_port = 9001
xdebug.idekey="PHPSTORM"
When I debug the web application in PhpStorm, Xdebug is working, but the behavior is the same as a normal run:
I have set some breakpoints, but execution does not stop at them.
The debugger is disabled because of the xdebug.remote_enable = 0 config line.
It has to be 1 (on/true).
http://confluence.jetbrains.com/display/PhpStorm/Xdebug+Installation+Guide
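As a supplementary check (not part of the original answer), you can also verify that PhpStorm is actually listening on the DBGp port configured above (9001 in this php.ini); a minimal Python sketch:
import socket

# Quick check: is anything (e.g. PhpStorm) accepting connections on port 9001?
s = socket.socket()
s.settimeout(2.0)
try:
    s.connect(("localhost", 9001))
    print("a listener is accepting connections on 9001")
except OSError as exc:
    print("no listener on 9001:", exc)
finally:
    s.close()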

Sometimes isql doesn't connect and tsql does

I am stuck on a problem that happens intermittently with unixODBC and FreeTDS. I have a CentOS web server where I have set up the configuration files as follows:
odbc.ini:
[XYZ]
Driver = FreeTDS
Server = X.X.X.X
Port = 1433
Database = mydatabase
TDS_Version = 8.0
odbcinst.ini
[PostgreSQL]
Description = ODBC for PostgreSQL
Driver = /usr/lib/libodbcpsql.so
Setup = /usr/lib/libodbcpsqlS.so
FileUsage = 1
[FreeTDS]
Description = v0.82 with protocol v8.0
Driver = /usr/local/lib/libtdsodbc.so
Setup = /usr/local/lib/libtdsodbc.so
UsageCount = 1
Trace = Yes
TraceFile = /tmp/freetds.log
ForceTrace = Yes
FileUsage = 1
[ODBC]
;Trace = Yes
;TraceFile = /tmp/freetds.log
;ForceTrace = Yes
;Pooling = No
freetds.conf
# $Id: freetds.conf,v 1.12 2007/12/25 06:02:36 jklowden Exp $
#
# This file is installed by FreeTDS if no file by the same
# name is found in the installation directory.
#
# For information about the layout of this file and its settings,
# see the freetds.conf manpage "man freetds.conf".
# Global settings are overridden by those in a database
# server specific section
[global]
# TDS protocol version
tds version = 8.0
# Whether to write a TDSDUMP file for diagnostic purposes
# (setting this to /tmp is insecure on a multi-user system)
; dump file = /tmp/freetds.log
; debug flags = 0xffff
# Command and connection timeouts
; timeout = 10
; connect timeout = 10
# If you get out-of-memory errors, it may mean that your client
# is trying to allocate a huge buffer for a TEXT field.
# Try setting 'text size' to a more reasonable limit
text size = 64512
# A typical Sybase server
[egServer50]
host = symachine.domain.com
port = 5000
tds version = 5.0
# A typical Microsoft server
[egServer70]
host = ntmachine.domain.com
port = 1433
tds version = 7.0
[XYZ]
host = X.X.X.X
port = 1433
tds version = 8.0
When it won't connect, running isql -v XYZ username password gives this trace log:
[ODBC][22870][__handles.c][444]
Exit:[SQL_SUCCESS]
Environment = 0x938ab58
[ODBC][22870][SQLAllocHandle.c][345]
Entry:
Handle Type = 2
Input Handle = 0x938ab58
[ODBC][22870][SQLAllocHandle.c][463]
Exit:[SQL_SUCCESS]
Output Handle = 0x938b130
[ODBC][22870][SQLConnect.c][3549]
Entry:
Connection = 0x938b130
Server Name = [XYZ][length = 14 (SQL_NTS)]
User Name = [username][length = 11 (SQL_NTS)]
Authentication = [*************][length = 13 (SQL_NTS)]
UNICODE Using encoding ASCII 'ISO8859-1' and UNICODE 'UCS-2LE'
DIAG [42000] [FreeTDS][SQL Server]Login failed for user 'username'.
DIAG [42000] [FreeTDS][SQL Server]Cannot open database "mydatabase" requested by the login. The login failed.
DIAG [S1000] [FreeTDS][SQL Server]Unable to connect to data source
[ODBC][22870][SQLConnect.c][3917]
Exit:[SQL_ERROR]
[ODBC][22870][SQLError.c][424]
Entry:
Connection = 0x938b130
SQLState = 0xbf8ba54e
Native = 0xbf8ba350
Message Text = 0xbf8ba359
Buffer Length = 500
Text Len Ptr = 0xbf8ba356
[ODBC][22870][SQLError.c][461]
Exit:[SQL_SUCCESS]
SQLState = S1000
Native = 0xbf8ba350 -> 0
Message Text = [[unixODBC][FreeTDS][SQL Server]Unable to connect to data source]
[ODBC][22870][SQLError.c][424]
Entry:
Connection = 0x938b130
SQLState = 0xbf8ba54e
Native = 0xbf8ba350
Message Text = 0xbf8ba359
Buffer Length = 500
Text Len Ptr = 0xbf8ba356
[ODBC][22870][SQLError.c][461]
Exit:[SQL_SUCCESS]
SQLState = 37000
Native = 0xbf8ba350 -> 4060
Message Text = [[unixODBC][FreeTDS][SQL Server]Cannot open database "mydatabase" requested by the login. The login failed.]
[ODBC][22870][SQLError.c][424]
Entry:
Connection = 0x938b130
SQLState = 0xbf8ba54e
Native = 0xbf8ba350
Message Text = 0xbf8ba359
Buffer Length = 500
Text Len Ptr = 0xbf8ba356
[ODBC][22870][SQLError.c][461]
Exit:[SQL_SUCCESS]
SQLState = 37000
Native = 0xbf8ba350 -> 18456
Message Text = [[unixODBC][FreeTDS][SQL Server]Login failed for user 'username'.]
[ODBC][22870][SQLError.c][424]
Entry:
Connection = 0x938b130
SQLState = 0xbf8ba54e
Native = 0xbf8ba350
Message Text = 0xbf8ba359
Buffer Length = 500
Text Len Ptr = 0xbf8ba356
[ODBC][22870][SQLError.c][461]
Exit:[SQL_NO_DATA]
[ODBC][22870][SQLError.c][504]
Entry:
Environment = 0x938ab58
SQLState = 0xbf8ba54e
Native = 0xbf8ba350
Message Text = 0xbf8ba359
Buffer Length = 500
Text Len Ptr = 0xbf8ba356
[ODBC][22870][SQLError.c][541]
Exit:[SQL_NO_DATA]
[ODBC][22870][SQLFreeHandle.c][268]
Entry:
Handle Type = 2
Input Handle = 0x938b130
[ODBC][22870][SQLFreeHandle.c][317]
Exit:[SQL_SUCCESS]
[ODBC][22870][SQLFreeHandle.c][203]
Entry:
Handle Type = 1
Input Handle = 0x938ab58
Meanwhile, the tsql command works fine. What could it be?
The unixODBC version is 2.2.11-7.1. FreeTDS doesn't appear as installed, but it is there, since libtdsodbc.so is present in /usr/local/lib.
I should also say that only a complete reboot of the CentOS machine gets isql working again.
What can I do or check? Thanks a lot in advance!
Cheers,
Luigi
You should fix odbc.ini to use the server name configured in freetds.conf:
instead of "Server = X.X.X.X", put "servername = XYZ".
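After that change, a quick way to exercise the DSN outside of isql is a minimal pyodbc check (this assumes pyodbc is installed; the credentials are placeholders):
import pyodbc

# Hypothetical smoke test against the [XYZ] DSN defined in odbc.ini
conn = pyodbc.connect("DSN=XYZ;UID=username;PWD=password")
row = conn.cursor().execute("SELECT 1").fetchone()
print("connected, SELECT 1 ->", row[0])
conn.close()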
