I am seeing an issue with iOS 6, AFNetworking and NSURLConnection that doesn't make any sense.
To reproduce the issue:
1. Make a request, any request, and make sure it's successful.
2. Turn off internet connectivity.
3. Launch the app again and make the same request.
iOS 5.1:
2012-10-09 15:30:43.667 CACHE TEST[3811:1bb03] <NSMutableURLRequest http://myserver.in/someImage.png>
2012-10-09 15:30:43.674 CACHE TEST[3811:1bb03] -[AFURLConnectionOperation connection:didReceiveData:]
2012-10-09 15:30:43.675 CACHE TEST[3811:1bb03] -[AFURLConnectionOperation connectionDidFinishLoading:]
Data is picked up [from NSURLCache I assume], and connectionDidFinishLoading is called.
iOS 6.0:
2012-10-09 15:32:34.578 CACHE TEST [3877:1e903] <NSMutableURLRequest http://myserver.in/someImage.png>
2012-10-09 15:32:34.578 CACHE TEST [3877:1e903] -[AFURLConnectionOperation connection:didFailWithError:]
Data is NOT picked up [I verified that the NSURLCache's Cache.db does have that image in the sqlite db as a blob], and connection:didFailWithError: is called.
I did not find anything in the API diffs that would suggest a relevant change in NSURLConnection, NSURLRequest, or NSURLCache.
The result is that while the success block is called when the app is run offline on iOS 5.1, the failure block gets called on iOS 6.0.
UPDATE
I wrote a test to check whether NSURLConnection, NSURLRequest or NSURLCache behave differently.
You can find it at http://github.com/abhinittiwari/NSURLCacheTest
I am using XTDB 1.21.0 deployed on AWS ECS (Fargate), with RocksDB checkpoints configured (frequency 30 minutes) and stored in an S3 bucket. After a couple of successful checkpoints, they seem to fail constantly with an XTDB warning caused by an exception in the HTTP request to AWS, as shown below:
This leaves the S3 bucket with incomplete checkpoints (i.e., a folder containing a set of SST files and other RocksDB files, but no associated EDN index file):
The XTDB documentation mentions that an optional S3Configurator can be passed to the node configuration, and after a bit of Googling I figured that makeClient should be overridden so that connectionAcquisitionTimeout can be set:
NettyNioAsyncHttpClient.builder()
    .maxConcurrency(200)
    .connectionAcquisitionTimeout(Duration.ofMillis(20000))
I am not too familiar with Netty, so I would appreciate it if someone could help with the right incantation.
Also, I am configuring the XTDB node from an EDN file and haven't figured out how to write an S3Configurator in an EDN file (or whether it is even possible).
Thanks in advance!
This can happen with large datasets, where the default S3 client creates a new async request for each object (and the number of objects may be very large, particularly when using the RocksDB index). Internally, the SDK uses connectionAcquisitionTimeout as a form of backpressure to ensure that incoming requests don't wait indefinitely for a connection from the connection pool. In this case, however, we're the only source of these requests and we definitely want them to complete before starting the node, so it's reasonable to set connectionAcquisitionTimeout to something very high (the default is only 10 seconds). A good choice of limit might be the maximum amount of time you want to wait for the node to start before failing.
This appears to be a non-optional parameter of the SDK, for what I can only assume is a sensible default strategy for requests coming from an external source; in our case we essentially want it to behave as if it were a synchronous operation.
Configuring this in Clojure with xtdb would look something like this:
(ns foo.db
  (:require
   [xtdb.api :as xtdb]
   [xtdb.checkpoint]
   [xtdb.rocksdb]
   [xtdb.s3.checkpoint])
  (:import
   (java.time Duration)
   (software.amazon.awssdk.http.nio.netty NettyNioAsyncHttpClient)
   (software.amazon.awssdk.services.s3 S3AsyncClient)
   (xtdb.checkpoint Checkpointer)
   (xtdb.s3 S3Configurator)))

(def s3-configurator
  (reify S3Configurator
    (makeClient [this]
      (.. (S3AsyncClient/builder)
          (httpClientBuilder
           (.. (NettyNioAsyncHttpClient/builder)
               (connectionAcquisitionTimeout
                (Duration/ofSeconds 600)) ;; Set a high limit here
               ;; We can rely on the defaults for maxConcurrency and
               ;; maxPendingConnectionAcquires
               ;; (maxConcurrency (Integer. 200))
               ;; (maxPendingConnectionAcquires (Integer. 10000))
               ))
          (build)))))
(defn start-node!
  []
  (xtdb/start-node
   {:xtdb/index-store
    {:kv-store {:xtdb/module 'xtdb.rocksdb/->kv-store
                :db-dir "/var/xtdb/idxs"
                :checkpointer {:xtdb/module 'xtdb.checkpoint/->checkpointer
                               :store {:xtdb/module 'xtdb.s3.checkpoint/->cp-store
                                       :configurator (constantly s3-configurator)
                                       :bucket "checkpoints"}
                               :approx-frequency "PT3H"}}}}))
Kiwi version 6.0, tcms-api 5.0.
Given that 82 is a valid test run_id and 7 is a valid build_id for the test run's product in the Kiwi instance, run this Python snippet:
from tcms_api import TCMS
kiwi = TCMS()
kiwi.exec.TestRun.update(82, {'build' : 7})
Expected:
The test run's product build is updated from 1 (unspecified) to 7.
Result:
Exception has occurred: xmlrpc.client.Fault
<Fault -32603: "Internal error: 'status'">
There's no other call stack information, so I'm at a loss as to how to debug further. I've tried updating a couple of different fields (manager and status) with the same result. I also get the same result if the value I'm trying to update is unknown/invalid.
Additional information: the equivalent TestCaseRun.update() API call works; i.e., I can update the build information on a TestCaseRun instance.
@s-manke: this is a genuine bug. I've implemented a hot-fix here: https://github.com/kiwitcms/Kiwi/pull/553 so you can at least continue to use the API.
I am in the process of cutting a new version anyway, so this hot-fix will go in. However, the API will not process the status or stop_date fields at the moment.
When I try to synchronize my CalDAV server implementation with Thunderbird 45.4.0 and Lightning 4.7.4 (one particular calendar collection), it doesn't show any data or events in the calendar, even though the last call of the sequence provided the data.
In the Thunderbird error log I can see one error:
Timestamp: 07.11.16, 14:21:12
Error: [calCachedCalendar] replay action failed: null, uri=http://127.0.0.1:8003/sap/sports/webdav/appsvc/webdav/services/server.xsjs/cal/_D043133/, result=2147500037, op=[xpconnect wrapped calIOperation]
Source file: file:///Users/d043133/Library/Thunderbird/Profiles/hfbvuk9f.default/extensions/%7Be2fda1a4-762b-4020-b5ad-a41df1933103%7D/calendar-js/calCachedCalendar.js
Line: 327
The call sequence is as follows (detailed content via gist links):
Propfind Request - Response
Options Request - Response
Propfind Request - Response
Report Request - Response - Response Raw
Synchronization with other clients such as the macOS Calendar and iOS Calendar works in principle and shows the data. Does anyone have a clue what is going wrong here?
Not sure whether this is the cause, but I can see two incorrect things:
a) Your <href/> property has trailing whitespace:
<d:href>/sap/sports/webdav/appsvc/webdav/services/server.xsjs/cal/_D043133/EVENT%3A070768ba5dd78ff15458f1985cdaabb1.ics
</d:href>
b) Your ORGANIZER property value is not a valid URI:
ORGANIZER:_D043133
It should be a calendar user address, i.e. a URI, typically a mailto: address (e.g. ORGANIZER:mailto:someone@example.com).
I was able to find the cause of the above issue by debugging Thunderbird as proposed by Philipp. The REPORT response has HTTP status code 200, but as it is a multistatus response, Thunderbird/Lightning expects status code 207 (Multi-Status) ;-)
Thanks for the hints!
I am writing a Ring/Compojure app in Clojure that fetches content for pages from a database. To be able to write tests for how the content is displayed, I created prod and dev environments; when using the dev environment, a mock database is used instead of the production database. I achieve this by reading the database from another file and passing it as a parameter to my routes. Here's a simplified version:
(defn www-routes [db]
  (defroutes www-routes
    (GET "/" [] ...)))

(def config (delay (load-file (.getFile (resource "config.clj")))))

(defn db []
  (if (= "dev" (:database (force config)))
    'kipsu.db-mock
    'kipsu.database))

(def app (routes (www-routes (db))))
The setup is largely taken from the example here, with the addition of setting the database as a parameter.
This setup works great for running the tests with the mock database and displaying real content in the prod environment. Things run fine when I start a lein server locally, run tests, or call any of the functions in the lein repl. My problem comes when I'd like to create an uberjar for deploying the changes to my server.
This is where I get a NullPointerException when compiling, starting from the (db) function call inside the def app. I've tried debugging with little success and am not even 100% sure where the actual error is. All I know is that the db function is never even called. Here's the stack trace:
Compiling kipsu.jdbc.json
Compiling kipsu.database
Compiling kipsu.api_converter
Compiling kipsu.web
java.lang.NullPointerException, compiling:(web.clj:124:29)
at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3628)
at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3622)
at clojure.lang.Compiler$BodyExpr.eval(Compiler.java:5879)
at clojure.lang.Compiler$DefExpr.eval(Compiler.java:439)
at clojure.lang.Compiler.compile1(Compiler.java:7323)
at clojure.lang.Compiler.compile(Compiler.java:7390)
at clojure.lang.RT.compile(RT.java:399)
at clojure.lang.RT.load(RT.java:444)
at clojure.lang.RT.load(RT.java:412)
at clojure.core$load$fn__5448.invoke(core.clj:5866)
at clojure.core$load.doInvoke(core.clj:5865)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5671)
at clojure.core$compile$fn__5453.invoke(core.clj:5877)
at clojure.core$compile.invoke(core.clj:5876)
at user$eval9$fn__16.invoke(form-init1768231915654429312.clj:1)
at user$eval9.invoke(form-init1768231915654429312.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6782)
at clojure.lang.Compiler.eval(Compiler.java:6772)
at clojure.lang.Compiler.load(Compiler.java:7227)
at clojure.lang.Compiler.loadFile(Compiler.java:7165)
at clojure.main$load_script.invoke(main.clj:275)
at clojure.main$init_opt.invoke(main.clj:280)
at clojure.main$initialize.invoke(main.clj:308)
at clojure.main$null_opt.invoke(main.clj:343)
at clojure.main$main.doInvoke(main.clj:421)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:383)
at clojure.lang.AFn.applyToHelper(AFn.java:156)
at clojure.lang.Var.applyTo(Var.java:700)
at clojure.main.main(main.java:37)
Caused by: java.lang.NullPointerException
at kipsu.web$fn__4568.invoke(web.clj:115)
at clojure.lang.Delay.deref(Delay.java:37)
at clojure.lang.Delay.force(Delay.java:27)
at clojure.core$force.invoke(core.clj:730)
at kipsu.web$db.invoke(web.clj:118)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3623)
... 30 more
Exception in thread "main" java.lang.NullPointerException, compiling (web.clj:124:29)
Compilation failed: Subprocess failed
I'm not the most fluent with Clojure and am working on this app to learn more. Any help steering me in the right direction from here is greatly appreciated!
I think your problem is that def evaluates its init expression at compile time (when the namespace is AOT-compiled, as happens during lein uberjar). You've done the right thing with (def config) and (defn db), but (def app) is going to cause compile-time errors if it cannot find your file. To understand why, let's look at def.
(def hello (println "hello"))
If you try to compile this code, you'll see "hello" printed to your console at compile time, and at runtime the var hello will have the value nil.
(def hello (delay (println "hello")))
(def world @hello)
The delay inside hello now won't get evaluated at compile time, but by introducing the var world, which derefs the delay at compile time, we get the exact same problem.
Now back to your specific problem. You don't want your configuration to get read at compile time, and you don't want your configuration to have to read a file from disk every single time you need it.
Not reading your configuration at compile time makes me think that maybe it should be a function. If it is a function, you can simply use memoize to ensure it doesn't read from disk every time you call it (see the sketch below).
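Here is one way that could look, reusing the resource/load-file approach from the question (read-config is just an illustrative name, and config.clj is assumed to be the same classpath resource as in the question):

(require '[clojure.java.io :as io])

;; Reading the config lives inside a function, so nothing runs at compile time.
(defn- read-config []
  (load-file (.getFile (io/resource "config.clj"))))

;; memoize does not call read-config when this def is compiled; the file is
;; only read the first time (config) is invoked, and the result is cached.
(def config (memoize read-config))

(defn db []
  (if (= "dev" (:database (config)))
    'kipsu.db-mock
    'kipsu.database))

Note that (def app (routes (www-routes (db)))) would still call (db) while the namespace is being compiled; building the handler inside a function (or only when the server actually starts) avoids that as well.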
I'm seeing 100% utilisation of ActiveMQ's temp storage (configured to be 100 MB), and the ActiveMQ client is blocking. The usage stays at 100% permanently, and I have no idea what's going on.
I have a Camel route that consumes from a queue (QUEUE.IN) using the JmsTransactionManager:
public final class RouteUnderTest extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("activemq-transacted:QUEUE.IN")
            .bean(myBean)
            .to("activemq:QUEUE.OUT");
    }
}
While processing the message from this queue, I'm invoking a Spring Integration client (myBean), which is configured as follows:
<int:gateway id="myBean" service-interface="MyBean">
    <int:method name="request" request-channel="channel"/>
</int:gateway>

<int:chain input-channel="channel">
    <int:transformer ref="transformedToJsonHere"/>
    <jms:outbound-gateway request-destination-name="QUEUE.MYBEAN"
                          receive-timeout="5000"
                          explicit-qos-enabled="true"
                          time-to-live="5000"
                          delivery-persistent="false"/>
    <int:transformer ref="transformedToAnObjectHere"/>
</int:chain>
My broker is configured to use LevelDB, with the following usage limits:
<persistenceAdapter>
    <levelDB directory="${activemq.data}/leveldb"/>
</persistenceAdapter>

<systemUsage>
    <systemUsage>
        <memoryUsage>
            <memoryUsage percentOfJvmHeap="70"/>
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="500 mb"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="100 mb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>
When my route consumes the message and then attempts to put a non-persistent message on QUEUE.OUT, the client is blocked and my broker shows 100% usage of temp storage.
And I see the following in the ActiveMQ logs:
2015-07-28 15:44:59,678 | INFO | Usage(default:temp:queue://QUEUE.MYBEAN:temp) percentUsage=0%, usage=104857600, limit=104857600, percentUsageMinDelta=1%;Parent:Usage(default:temp) percentUsage=100%, usage=104857600, limit=104857600, percentUsageMinDelta=1%: Temp Store is Full (0% of 104857600). Stopping producer (ID:orbit-vm-55561-1438094698190-1:1:3:1) to prevent flooding queue://QUEUE.MYBEAN. See http://activemq.apache.org/producer-flow-control.html for more info (blocking for: 1s) | org.apache.activemq.broker.region.Queue | ActiveMQ NIO Worker 6
The queues look like this (you can see that the QUEUE.IN message has not been dequeued, because it's still being processed transactionally, and no message has gone to QUEUE.MYBEAN):
I can fix this problem with any one of the following approaches:
Use KahaDB instead of LevelDB
Increase temp storage limit (150MB seems to do it but I haven't experimented a great deal)
Configure tempDataStore in activemq.xml (see below)
When configuring the tempDataStore explicitly, it looks like this:
<tempDataStore>
    <bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.leveldb.LevelDBStore">
        <property name="directory" value="${activemq.data}/tmp" />
    </bean>
</tempDataStore>
I should add that we were using KahaDB previously and this worked fine; the upgrade to LevelDB has exposed this issue. Reverting to KahaDB is not an option.
I'm hoping someone can explain what we're seeing here, as the results are really difficult to understand. Why does using LevelDB necessitate a higher temp usage limit? And why does configuring the tempDataStore explicitly also fix the problem?
I don't fully understand what's going on here, so I'm worried that simply increasing the temp usage limit a little will just hide the problem until a later date.
Versions:
ActiveMQ: 5.11.1
Camel: 2.14.0
Spring: 4.0.8.RELEASE
Spring Integration: 4.0.5.RELEASE
We ran into exactly the same issue with ActiveMQ 5.13.2.
The solution when using LevelDB is to explicitly configure a dedicated tempDataStore, as you did.
If you don't, the broker uses the same store (LevelDB) for both persistent messages (store usage) and non-persistent messages (temp usage). You may therefore end up in situations where the broker no longer accepts any non-persistent messages simply because the store already holds persistent ones up to the configured tempUsage limit. It will, however, still accept persistent ones if your storeUsage limit is set higher...
When using KahaDB, the broker automatically uses a separate store for non-persistent messages (created in the tmp directory), so you don't have this problem...
Look at the following code for more in-depth information: https://github.com/apache/activemq/blob/activemq-5.13.2/activemq-broker/src/main/java/org/apache/activemq/broker/BrokerService.java#L1739
When reading that code, remember that LevelDBStore implements PListStore, but KahaDBStore doesn't...