I am using JavaMail (1.5.5) to fetch mail from an Exchange 2010 server. It fails to fetch a few messages with java.lang.OutOfMemoryError: Java heap space, yet the same code works fine with JavaMail 1.4.5, with no heap space error.
What is strange is that the problematic message is only 136,516 bytes, so over IMAP it should be fetched in about 9 chunks (the default fetch size is 16 KB). But the debug trace below shows something odd.
Here is my code:
Message messObj = message[i];
String messSubject = null;
String messMsgId = null;

// Copy the raw message into memory so it can be re-parsed locally.
java.io.ByteArrayOutputStream bos = new java.io.ByteArrayOutputStream();
messObj.writeTo(bos);
bos.close();

javax.mail.util.SharedByteArrayInputStream bis =
        new javax.mail.util.SharedByteArrayInputStream(bos.toByteArray());
MimeMessage cmsg = new MimeMessage(session, bis);
bis.close();

mp.setMessage(cmsg); // 'mp' comes from surrounding code
messSubject = cmsg.getSubject();
messMsgId = cmsg.getMessageID();
System.out.println("debugging:::subject::" + messSubject + "::msgid::" + messMsgId);
Here is the relevant part of the IMAP debug trace (message body content elided):

FLAGS (\Seen \Answered))
[14:48:21:205]|[09-26-2018]|[SYSOUT]|[INFO]|[73]: A3893 OK FETCH completed.
[14:48:21:205]|[09-26-2018]|[SYSOUT]|[INFO]|[73]: A3894 FETCH 1 (BODY[]<536512690.16384>)
[14:48:21:220]|[09-26-2018]|[SYSOUT]|[INFO]|[73]: * 1 FETCH (UID 91149 BODY[] {137921}
FLAGS (\Seen \Answered))
[14:48:21:236]|[09-26-2018]|[SYSOUT]|[INFO]|[73]: A3894 OK FETCH completed.
[14:48:21:236]|[09-26-2018]|[SYSOUT]|[INFO]|[73]: A3895 FETCH 1 (BODY[]<536650611.16384>)
[14:48:21:251]|[09-26-2018]|[SYSOUT]|[INFO]|[73]: * 1 FETCH (UID 91149 BODY[] {137921}
--5c3ef07b8737170331214bb435ce7be6--
[14:48:21:267]|[09-26-2018]|[SYSOUT]|[INFO]|[73]: FLAGS (\Seen \Answered))
[14:48:21:267]|[09-26-2018]|[SYSOUT]|[INFO]|[73]: A3895 OK FETCH completed.
[14:48:21:267]|[09-26-2018]|[SYSOUT]|[INFO]|[73]: A3896 FETCH 1 (BODY[]<536788532.16384>)
[14:48:21:283]|[09-26-2018]|[SYSOUT]|[INFO]|[73]: * 1 FETCH (UID 91149 BODY[] {137921}
The error occurs before the System.out.println. Each time the server returns the entire {137921}-byte message instead of a 16 KB chunk, and the fetch is repeated 3,896 times (after which the heap space error occurs and stops the process); note that the partial-fetch offsets in the trace have already passed 536 MB even though the message is only ~137 KB. Could someone tell me what I am missing here?
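For reference, JavaMail's IMAP chunking is controlled by session properties. A minimal sketch of the two relevant knobs, both documented for the JavaMail IMAP provider (whether disabling partial fetch sidesteps this particular Exchange behaviour is an assumption):

java.util.Properties props = new java.util.Properties();
// Fetch whole messages at once instead of 16 KB BODY[]<offset.length> chunks.
props.setProperty("mail.imap.partialfetch", "false");
// Or keep partial fetch but raise the chunk size (bytes; default is 16384).
// props.setProperty("mail.imap.fetchsize", "1048576");
javax.mail.Session session = javax.mail.Session.getInstance(props);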
I have a Flink job running in AWS Kinesis Data Analytics that does the following:
1 - A table on a Kinesis stream, called MainEvents.
2 - A sink table pointing to another Kinesis stream, called perMinute.
perMinute is populated using MainEvents as input and holds a sliding (hopping) window aggregation.
So far so good.
My final consumer is a Python script that reads from the perMinute Kinesis stream.
This is my consumer script:
import time
import boto3

stream_name = 'perMinute'
ses = boto3.session.Session()
kinesis_client = ses.client('kinesis')

# Look up the first shard and start reading from the latest position.
response = kinesis_client.describe_stream(StreamName=stream_name)
shard_id = response['StreamDescription']['Shards'][0]['ShardId']
response = kinesis_client.get_shard_iterator(
    StreamName=stream_name,
    ShardId=shard_id,
    ShardIteratorType='LATEST'
)
shard_iterator = response['ShardIterator']

while shard_iterator is not None:
    result = kinesis_client.get_records(ShardIterator=shard_iterator, Limit=1)
    records = result["Records"]
    shard_iterator = result["NextShardIterator"]
    for record in records:
        data = str(record["Data"])
        print(data)
    time.sleep(1)
The issue I have is that I get encoded data that looks like this:
b'{"window_start":"2022-09-28 04:01:46","window_end":"2022-09-28 04:02:46","counts":300}'
b'{"window_start":"2022-09-28 04:02:06","window_end":"2022-09-28 04:03:06","counts":478}'
b'\xf3\x89\x9a\xc2\n$4a540599-485d-47c5-9a7e-ca46173b30de\n$2349a5a3-7949-4bde-95a8-4019a077586b\x1aX\x08\x00\x1aT{"window_start":"2022-09-28 04:02:16","window_end":"2022-09-28 04:03:16","counts":504}\x1aX\x08\x01\x1aT{"window_start":"2022-09-28 04:02:18","window_end":"2022-09-28 04:03:18","counts":503}\xc3\xa1\xfe\xfa9j\xeb\x1aP\x917F\xf3\xd2\xb7\x02'
b'\xf3\x89\x9a\xc2\n$23a0d76c-6939-4eda-b5ee-8cd2b3dc1c1e\n$7ddf1c0c-16fe-47a0-bd99-ef9470cade28\x1aX\x08\x00\x1aT{"window_start":"2022-09-28 04:02:30","window_end":"2022-09-28 04:03:30","counts":531}\x1aX\x08\x01\x1aT{"window_start":"2022-09-28 04:02:36","window_end":"2022-09-28 04:03:36","counts":560}\x0c>.\xbd\x0b\xac.\x9a\xe8z\x04\x850\xd5\xa6\xb3'
b'\xf3\x89\x9a\xc2\n$2cacfdf8-a09b-4fa3-b032-6f1707c966c3\n$27458e17-8a3a-434e-9afd-4995c8e6a1a4\n$11774332-d906-4486-a959-28ceec0d134a\x1aY\x08\x00\x1aU{"window_start":"2022-09-28 04:02:42","window_end":"2022-09-28 04:03:42","counts":1625}\x1aY\x08\x01\x1aU{"window_start":"2022-09-28 04:02:50","window_end":"2022-09-28 04:03:50","counts":2713}\x1aY\x08\x02\x1aU{"window_start":"2022-09-28 04:03:00","window_end":"2022-09-28 04:04:00","counts":3009}\xe1G\x18\xe7_a\x07\xd3\x81O\x03\xf9Q\xaa\x0b_'
Some records are valid, such as the first two, but the others seem to contain multiple entries in the same record.
How can I get rid of the extra characters that are not part of the JSON payload and get one line per entry?
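For context, payloads that start with the bytes \xf3\x89\x9a\xc2 match the magic number that the Kinesis Producer Library (KPL) prepends to aggregated records, i.e. several user records packed into a single Kinesis record. A minimal consumer-side sketch using the aws-kinesis-agg helper package (assumptions: that pip package is installed, and the record dicts it returns mirror the shape that get_records hands in):

# pip install aws-kinesis-agg
from aws_kinesis_agg.deaggregator import deaggregate_records

while shard_iterator is not None:
    result = kinesis_client.get_records(ShardIterator=shard_iterator, Limit=1)
    shard_iterator = result["NextShardIterator"]
    # Unpack KPL-aggregated records into individual user records;
    # records that are not aggregated pass through unchanged.
    for record in deaggregate_records(result["Records"]):
        print(record["Data"].decode("utf-8"))
    time.sleep(1)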
If I use decode('utf-8'), a few records come out fine, but at some point it fails with:
while shard_iterator is not None:
    result = kinesis_client.get_records(ShardIterator=shard_iterator, Limit=1)
    records = result["Records"]
    shard_iterator = result["NextShardIterator"]
    for record in records:
        data = record["Data"].decode('utf-8')
        # data = record["Data"].decode('latin-1')
        print(data)
    time.sleep(1)
{"window_start":"2022-09-28 03:59:24","window_end":"2022-09-28 04:00:24","counts":319}
{"window_start":"2022-09-28 03:59:28","window_end":"2022-09-28 04:00:28","counts":366}
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-108-0e632a57c871> in <module>
39 shard_iterator = result["NextShardIterator"]
40 for record in records:
---> 41 data = record["Data"].decode('utf-8')
43 print(data)
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-2: invalid continuation byte
If I use decode('latin-1') it does not fail, but I get a lot of garbage text out:
{"window_start":"2022-09-28 04:02:06","window_end":"2022-09-28 04:03:06","counts":478}
óÂ
$4a540599-485d-47c5-9a7e-ca46173b30de
$2349a5a3-7949-4bde-95a8-4019a077586bXT{"window_start":"2022-09-28 04:02:16","window_end":"2022-09-28 04:03:16","counts":504}XT{"window_start":"2022-09-28 04:02:18","window_end":"2022-09-28 04:03:18","counts":503}áþú9jëP7FóÒ·
óÂ
Here is the stream producer Flink code:
-- create sink
CREATE TABLE perMinute (
window_start TIMESTAMP(3) NOT NULL,
window_end TIMESTAMP(3) NOT NULL,
counts BIGINT NOT NULL
)
WITH (
'connector' = 'kinesis',
'stream' = 'perMinute',
'aws.region' = 'ap-southeast-2',
'scan.stream.initpos' = 'LATEST',
'format' = 'json',
'json.timestamp-format.standard' = 'ISO-8601'
);
%flink.ssql(type=update)
insert into perMinute
SELECT window_start, window_end, COUNT(DISTINCT event) as counts
FROM TABLE(
HOP(TABLE MainEvents, DESCRIPTOR(eventtime), INTERVAL '5' SECOND, INTERVAL '60' SECOND))
GROUP BY window_start, window_end;
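For what it's worth, the binary records are created on this producer side: the Flink Kinesis sink can write through the KPL with record aggregation enabled by default. If the connector version in use supports the KPL passthrough options, aggregation can reportedly be disabled in the sink's WITH clause; the exact option name below is an assumption based on the connector's sink.producer.* passthrough to KinesisProducerConfiguration:

'sink.producer.aggregation-enabled' = 'false'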
Thanks
I generated code from a MIB file with mib2c. When I try to set an object that has read-write access, it returns: Error in packet. Reason: notWritable (That object does not support modification).
I ran my subagent with a few debug flags and found that not a single function in the generated code is called on an snmpset request, only on snmpget. An snmpget on exactly the same OID returns a valid value. I have a user with RW access everywhere, and I can set sysName.0 with that same user. I also tried removing the MIB file and using the exact OID, with the same result.
Because the request never even reaches my code, I am not sure what to do.
I tried it with two tables generated the same way: one table has an IMPLIED DisplayString index, and the second has an INDEX combining two INTEGERs.
EDIT:
I found out that a .conf file is created in /var/lib/snmp/ for each of my agents. I tried adding create_user with the same name and password, but it disappeared after the agent was restarted.
EDIT2:
The code was generated using mib2c.mfd.conf. I tried mib2c.iterate.conf, and with that template the generated functions are called. So sets do not work with mib2c.mfd.conf but look like they will work with mib2c.iterate.conf. I would still like to make it work with mib2c.mfd.conf so I don't have to change all my subagents.
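For reference, the two code variants came from invocations along these lines (the node name is taken from the debug output below; the exact command line is an assumption):

mib2c -c mib2c.mfd.conf mseDpuConfigActivationTable
mib2c -c mib2c.iterate.conf mseDpuConfigActivationTable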
Output from my subagent, where 3.fw is the index:
agentx/subagent: checking status of session 0x44150
agentx_build: packet built okay
agentx/subagent: synching input, op 0x01
agentx/subagent: session 0x44150 responded to ping
agentx/subagent: handling AgentX request (req=0x1f9,trans=0x1f8,sess=0x21)
agentx/subagent: -> testset
snmp_agent: agent_sesion 0xc4a08 created
snmp_agent: add_vb_to_cache( 0xc4a08, 1, MSE-CONFIGURATION-MIB::mseDpuConfigActivationAdminStatus.3.fw, 0x3d3d0)
snmp_agent: tp->start MSE-CONFIGURATION-MIB::mseDpuConfigActivationTable, tp->end MSE-CONFIGURATION-MIB::mseDpuConfigActivation.3,
agent_set: doing set mode = 0 (SET_RESERVE1)
agent_set: did set mode = 0, status = 17
results: request results (status = 17):
results: MSE-CONFIGURATION-MIB::mseDpuConfigActivationAdminStatus.3.fw = INTEGER: prepare(1)
snmp_agent: REMOVE session == 0xc4a08
snmp_agent: agent_session 0xc4a08 released
snmp_agent: end of handle_snmp_packet, asp = 0xc4a08
agentx/subagent: handling agentx subagent set response (mode=162,req=0x1f9,trans=0x1f8,sess=0x21)
agentx_build: packet built okay
agentx/subagent: FINISHED
agentx/subagent: handling AgentX request (req=0x1fa,trans=0x1f8,sess=0x21)
agentx/subagent: -> cleanupset
snmp_agent: agent_sesion 0xc7640 created
agent_set: doing set mode = 4 (SET_FREE)
agent_set: did set mode = 4, status = 17
results: request results (status = 17):
results: MSE-CONFIGURATION-MIB::mseDpuConfigActivationAdminStatus.3.fw = INTEGER: prepare(1)
snmp_agent: REMOVE session == 0xc7640
snmp_agent: agent_session 0xc7640 released
snmp_agent: end of handle_snmp_packet, asp = 0xc7640
agentx/subagent: handling agentx subagent set response (mode=162,req=0x1fa,trans=0x1f8,sess=0x21)
agentx_build: packet built okay
agentx/subagent: FINISHED
agentx/subagent: checking status of session 0x44150
agentx_build: packet built okay
agentx/subagent: synching input, op 0x01
agentx/subagent: session 0x44150 responded to ping
Values/config used for generating code:
## defaults
#eval $m2c_context_reg = "netsnmp_data_list"#
#eval $m2c_data_allocate = 0#
#eval $m2c_data_cache = 1#
#eval $m2c_data_context = "generated"# [generated|NAME]
#eval $m2c_data_init = 1#
#eval $m2c_data_transient = 0#
#eval $m2c_include_examples = 1#
#eval $m2c_irreversible_commit = 0#
#eval $m2c_table_access = "container-cached"#
#eval $m2c_table_dependencies = 0#
#eval $m2c_table_persistent = 0#
#eval $m2c_table_row_creation = 0#
#eval $m2c_table_settable = 1#
#eval $m2c_table_skip_mapping = 1#
#eval $m2c_table_sparse = 1#
#eval $mfd_generate_makefile = 1#
#eval $mfd_generate_subagent = 1#
SNMPd version:
# snmpd --version
NET-SNMP version: 5.9
Web: http://www.net-snmp.org/
Email: net-snmp-coders@lists.sourceforge.net
I found the problem: in the *_interface.c file generated from the mib2c.mfd.conf template, there is an inverted check:
#if !(defined(NETSNMP_NO_WRITE_SUPPORT) || defined(NETSNMP_DISABLE_SET_SUPPORT))
HANDLER_CAN_RONLY
#else
HANDLER_CAN_RWRITE
#endif /* NETSNMP_NO_WRITE_SUPPORT || NETSNMP_DISABLE_SET_SUPPORT */
I removed the ! from the condition and it started working. Both defines are undefined, so the code should use HANDLER_CAN_RWRITE, but because of the inverted check it used HANDLER_CAN_RONLY.
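For clarity, the corrected block (the original check with the negation removed) is:

#if defined(NETSNMP_NO_WRITE_SUPPORT) || defined(NETSNMP_DISABLE_SET_SUPPORT)
HANDLER_CAN_RONLY
#else
HANDLER_CAN_RWRITE
#endif /* NETSNMP_NO_WRITE_SUPPORT || NETSNMP_DISABLE_SET_SUPPORT */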
I am finding that my database is the bottleneck in my application, and part of the problem appears to be that prepared statements are not being reused.
For example, here is a method I use:
public static CoverImage findCoverImageBySource(Session session, String src)
{
try
{
Query q = session.createQuery("from CoverImage t1 where t1.source=:source");
q.setParameter("source", src, StandardBasicTypes.STRING);
CoverImage result = (CoverImage)q.setMaxResults(1).uniqueResult();
return result;
}
catch (Exception ex)
{
MainWindow.logger.log(Level.SEVERE, ex.getMessage(), ex);
}
return null;
}
But the YourKit profiler reports:
com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery() Count 511
com.mchange.v2.c3p0.impl.NewProxyConnection.prepareStatement() Count 511
I assume the count for the prepareStatement() calls should be lower; as it stands, it looks like a new prepared statement is created every time instead of being reused.
https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html
I am using c3p0 connection pooling, which complicates things a little, but as I understand it I have it configured correctly:
public static Configuration getInitializedConfiguration()
{
//See https://www.mchange.com/projects/c3p0/#hibernate-specific
Configuration config = new Configuration();
config.setProperty(Environment.DRIVER,"org.h2.Driver");
config.setProperty(Environment.URL,"jdbc:h2:"+Db.DBFOLDER+"/"+Db.DBNAME+";FILE_LOCK=SOCKET;MVCC=TRUE;DB_CLOSE_ON_EXIT=FALSE;CACHE_SIZE=50000");
config.setProperty(Environment.DIALECT,"org.hibernate.dialect.H2Dialect");
System.setProperty("h2.bindAddress", InetAddress.getLoopbackAddress().getHostAddress());
config.setProperty("hibernate.connection.username","jaikoz");
config.setProperty("hibernate.connection.password","jaikoz");
config.setProperty("hibernate.c3p0.numHelperThreads","10");
config.setProperty("hibernate.c3p0.min_size","1");
//If we have lots of busy threads waiting on the next stages, we could
//have a lot of active connections.
config.setProperty("hibernate.c3p0.max_size","200");
config.setProperty("hibernate.c3p0.max_statements","5000");
config.setProperty("hibernate.c3p0.timeout","2000");
config.setProperty("hibernate.c3p0.maxStatementsPerConnection","50");
config.setProperty("hibernate.c3p0.idle_test_period","3000");
config.setProperty("hibernate.c3p0.acquireRetryAttempts","10");
//Cancel any connection that is more than 30 minutes old.
//config.setProperty("hibernate.c3p0.unreturnedConnectionTimeout","3000");
//config.setProperty("hibernate.show_sql","true");
//config.setProperty("org.hibernate.envers.audit_strategy", "org.hibernate.envers.strategy.ValidityAuditStrategy");
//config.setProperty("hibernate.format_sql","true");
config.setProperty("hibernate.generate_statistics","true");
//config.setProperty("hibernate.cache.region.factory_class", "org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory");
//config.setProperty("hibernate.cache.use_second_level_cache", "true");
//config.setProperty("hibernate.cache.use_query_cache", "true");
addEntitiesToConfig(config);
return config;
}
Using H2 1.3.172, Hibernate 4.3.11, and the corresponding c3p0 for that Hibernate version.
With a reproducible test case we have:
HibernateStats
HibernateStatistics.getQueryExecutionCount() 28
HibernateStatistics.getEntityInsertCount() 119
HibernateStatistics.getEntityUpdateCount() 39
HibernateStatistics.getPrepareStatementCount() 189
Profiler, method counts
GooGooStatementCache.acquireStatement() 35
GooGooStatementCache.checkinStatement() 189
GooGooStatementCache.checkoutStatement() 189
NewProxyPreparedStatement.init() 189
I don't know what I should be counting as creation of a prepared statement versus reuse of an existing one.
I also tried enabling c3p0 logging by adding a c3p0 logger and pointing it at the same log file in my LogProperties, but it had no effect:
String logFileName = Platform.getPlatformLogFolderInLogfileFormat() + "songkong_debug%u-%g.log";
FileHandler fe = new FileHandler(logFileName, LOG_SIZE_IN_BYTES, 10, true);
fe.setEncoding(StandardCharsets.UTF_8.name());
fe.setFormatter(new com.jthink.songkong.logging.LogFormatter());
fe.setLevel(Level.FINEST);
MainWindow.logger.addHandler(fe);
Logger c3p0Logger = Logger.getLogger("com.mchange.v2.c3p0");
c3p0Logger.setLevel(Level.FINEST);
c3p0Logger.addHandler(fe);
Now that I have eventually got c3p0-based logging working, I can confirm that @SteveWaldman's suggestion is correct.
If you enable:
public static Logger c3p0ConnectionLogger = Logger.getLogger("com.mchange.v2.c3p0.stmt");
c3p0ConnectionLogger.setLevel(Level.FINEST);
c3p0ConnectionLogger.setUseParentHandlers(false);
then you get log output of the form:
24/08/2019 10.20.12:BST:FINEST: com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache ----> CACHE HIT
24/08/2019 10.20.12:BST:FINEST: checkoutStatement: com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 347; checked out: 1; num connections: 13; num keys: 347
24/08/2019 10.20.12:BST:FINEST: checkinStatement(): com.mchange.v2.c3p0.stmt.DoubleMaxStatementCache stats -- total size: 347; checked out: 0; num connections: 13; num keys: 347
making it clear when you get a cache hit. When there is no cache hit you don't get the first line, only the other two.
This is using c3p0 0.9.2.1.
Please help. I was trying to call the Watson Assistant endpoint
https://gateway.watsonplatform.net/assistant/api/v1/workspaces/myworkspace/logs?version=2018-09-20 to get the list of all events,
filtered by date range using these params:
var param = {
    workspace_id: '{myworkspace}',
    page_limit: 100000,
    filter: 'response_timestamp%3C2018-17-12,response_timestamp%3E2019-01-01'
}
Apparently I got the empty response below:
{
"logs": [],
"pagination": {}
}
A couple of things to check:
1. You have 2018-17-12. The dates are year-month-day, so this translates to "the 12th day of the 17th month of 2018", which is not a valid date.
2. Assuming the date should be a valid one, your filter asks for "documents before 17th Dec 2018 and after 1st Jan 2019", which would return no documents.
3. Logs are only generated when you call the message() method through the API, so check the logging page in the tooling to see whether you even have logs.
4. If you have a Lite account, logs are only stored for 7 days and then deleted. To keep logs longer you need to upgrade to a Standard account.
Although not directly related to your issue, be aware that page_limit has a hard-coded upper limit (IIRC 200-300?), so you may ask for 100,000 records, but it won't give them to you.
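For illustration, a corrected set of params with valid dates and the range the question seems to intend (17 Dec 2018 up to 1 Jan 2019; the exact bounds and the smaller page_limit are assumptions):

var param = {
    workspace_id: '{myworkspace}',
    page_limit: 100,
    filter: 'response_timestamp%3E%3D2018-12-17,response_timestamp%3C2019-01-01'
}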
This is sample Python code (unsupported) that uses pagination to read the logs:
from urllib.parse import urlparse, parse_qs
from watson_developer_cloud import AssistantV1

username = '...'
password = '...'
workspace_id = '....'
url = '...'
version = '2018-09-20'

c = AssistantV1(url=url, version=version, username=username, password=password)

totalpages = 999
pagelimit = 200

logs = []
page_count = 1
cursor = None
count = 0
x = {'pagination': 'DUMMY'}
while x['pagination']:
    if page_count > totalpages:
        break
    print('Reading page {}. '.format(page_count), end='')
    x = c.list_logs(workspace_id=workspace_id, cursor=cursor, page_limit=pagelimit)
    if x is None:
        break
    print('Status: {}'.format(x.get_status_code()))
    x = x.get_result()
    logs.append(x['logs'])
    count = count + len(x['logs'])
    page_count = page_count + 1
    # Follow the pagination cursor, if the response includes one.
    if 'pagination' in x and 'next_url' in x['pagination']:
        p = x['pagination']['next_url']
        u = urlparse(p)
        query = parse_qs(u.query)
        cursor = query['cursor'][0]
Your logs object should then contain the logs.
I believe the limit is 500; after that a pagination URL is returned so you can get the next 500. I don't think this is the issue, but it's good to know once you start getting logs back.
I'm coding in VB.NET (I will accept answers in C#, as I can approach this either way) and I need to send data to and receive data from a VEX Cortex. It has a USB port, which I will connect to a computer that sends it data from a program.
I have researched this, and the .NET side seems fairly straightforward. Here is the code I currently have:
Dim sensor As New SerialPort("COM1")
sensor.BaudRate = 9600
sensor.Parity = Parity.None
sensor.StopBits = StopBits.One
sensor.DataBits = 8
sensor.ReadTimeout = 300
sensor.WriteTimeout = 300
sensor.Handshake = Handshake.None
sensor.Open()
I was looking around and found some steps for sending data from the program, shown below:
Dim byteOut(5) As Byte
Dim byteIn(6) As Byte
Dim Voltage, i As Integer
Try
    byteOut(0) = &H2 '2 bytes in output message
    byteOut(1) = &H0 'should be 0 for NXT
    byteOut(2) = &H0 '&H0 = reply expected, &H80 = no reply expected
    byteOut(3) = &HB '&HB = read battery command
    sensor.Write(byteOut, 0, 4) '0 = offset into byteOut, 4 = number of bytes
    'now read the reply
    byteIn(0) = sensor.ReadByte() 'number of bytes in message
    byteIn(1) = sensor.ReadByte() 'should be 0 for NXT
    For i = 2 To 1 + byteIn(0) 'read the rest of the message
        byteIn(i) = sensor.ReadByte()
    Next
    Voltage = byteIn(5) + byteIn(6) * 256 'low byte in 5, high byte in 6
    Console.Write(Voltage) 'display voltage
Catch ex As Exception
    MsgBox(ex.ToString)
End Try
The example above reads battery data from a different device (a LEGO NXT), so the command bytes themselves are irrelevant here; the point is that I need the Vex Cortex to read the data I send and to send data back to the computer.
The Cortex is programmed in C, and I'm using RobotC to compile and download the code to it. I can't figure out how to read the data for the life of me (example code said to do Serial.begin(9600);, but that is Arduino-style code and raises a compilation error in RobotC).
Can anyone assist me with how to read (and write back) data on the Cortex end of this transfer? I may need to ditch RobotC, and I'm perfectly fine with that if a solution is proposed without it, so long as it's still C code.
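RobotC does ship a UART API for the Cortex, so one option is to talk over one of the Cortex UART ports instead of the USB programming port. A minimal echo sketch, under the assumptions that the PC side is wired to UART1 through a TTL-level serial adapter and that the function names (setBaudRate, getChar, sendChar) match RobotC's serial sample code:

// RobotC on the VEX Cortex: echo every byte received on UART1 back to the PC.
task main()
{
    setBaudRate(UART1, baudRate9600);  // must match the PC side: 9600 baud, 8-N-1

    while (true)
    {
        short c = getChar(UART1);      // returns -1 when no byte is waiting
        if (c >= 0)
        {
            sendChar(UART1, c);        // write the byte straight back out
        }
        wait1Msec(1);                  // yield briefly instead of busy-spinning
    }
}

On the PC side, the sensor.Write(...) / sensor.ReadByte() calls from the snippet above would then see each byte echoed back.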