Background
In my database I have some uniqueness constraints. If the data breaks one of these constraints, I get an error message like Violation of UNIQUE KEY constraint.
I use tryCatch in my code to capture this error and return a meaningful message to the user. So far, so good.
However, if I try to run any new transaction on the server after capturing this error, I get another error message saying that I cannot begin a nested transaction.
My findings
I traced the error down and found that once dbRollback has been called (either explicitly or within withTransaction), one cannot submit any new dbBegin (either explicitly or implicitly via dbWriteTable and friends).
What I need to get unstuck is to run dbCommit, which then allows me to run another dbBegin.
Looking at the code of dbCommit and dbRollback, I see that in the former case
setAutoCommit is set to TRUE, which signals dbBegin that we are not nesting transactions. This is not the case for dbRollback:
getMethod("dbCommit", "SQLServerConnection")
# Method Definition:
#
# function (conn, ...)
# {
# rJava::.jcall(conn#jc, "V", "commit")
# rJava::.jcall(conn#jc, "V", "setAutoCommit", TRUE)
# TRUE
# }
# <environment: namespace:RSQLServer>
getMethod("dbRollback", "SQLServerConnection")
# Method Definition:
#
# function (conn, ...)
# {
# rJava::.jcall(conn#jc, "V", "rollback")
# TRUE
# }
# <environment: namespace:RSQLServer>
Question
So my question is: is this the intended behavior? That is, am I supposed to run a manual dbCommit after an operation was rolled back, or is this a bug?
Code
library(DBI)
library(RSQLServer)
db <- dbConnect(...)
dbBegin(db)
dbCommit(db)
dbBegin(db) # works
dbRollback(db)
dbBegin(db) # does not work
dbCommit(db) # my workaround
dbBegin(db) # works again
I'm following https://github.com/thomashoneyman/real-world-pact/ to deploy my contract on a local devnet.
I've updated the deployment script as follows:
const deployK = async () => {
  const detailArgs = ["--local", "k-contract-details"];
  const contractDetails = await parseArgs(detailArgs).then(runRequest);
  if (contractDetails.status === "failure") {
    console.log(
      "K contract not found on local Chainweb node. Deploying contract..."
    );
    const deployArgs = [
      "--send",
      "deploy-k-contract",
      "--signers",
      "kazora",
    ];
    const deployResult = await parseArgs(deployArgs).then(runRequest);
    if (deployResult.status === "success") {
      console.log(`Deployed! Cost: ${deployResult.gas} gas.`);
    } else {
      throw new Error(
        `Failed to deploy contract: ${JSON.stringify(
          deployResult.error,
          null,
          2
        )}`
      );
    }
  }
};
The deploy-k-contract.yaml is:
# This YAML file describes a transaction that, when executed, will deploy the
# faucet contract to Chainweb.
#
# To execute this request (you must have funded the faucet account):
#   faucet-request --send deploy-faucet-contract --signers k
#
# Alternately, to fund the faucet account _and_ deploy the contract:
#   faucet-deploy
networkId: "development"
type: "exec"
# To deploy our contract we need to send its entire contents to Chainweb as a
# transaction. When a Chainweb node receives a module it will attempt to
# register it in the given namespace.
codeFile: "../../k.pact"
# The 'data' key is for JSON data we want to include with our transaction. As a
# general rule, any use of (read-msg) or (read-keyset) in your contract
# indicates data that must be included here.
#
# Our contract reads the transaction data twice:
#   - (read-keyset "k-keyset")
#   - (read-msg "upgrade")
data:
  k-admin-keyset:
    # On deployment, our contract will register a new keyset on Chainweb named
    # 'k-keyset'. We'll use this keyset to govern the faucet
    # contract, which means the contract can only be upgraded by this keyset.
    #
    # We want the contract to be controlled by our faucet account, which means
    # our keyset should assert that the k.yaml keys were used to
    # sign the transaction. The public key below is from the k.yaml
    # key pair file.
    keys:
      - "1b54c9eac0047b10f7f6a6f270f7156fb519ef02c9bb96dc28a4e50c48a468f4"
    pred: "keys-all"
  # Next, our contract looks for an 'upgrade' key to determine whether it should
  # initialize data (for example, whether it should create tables). This request
  # deploys the contract, so we'll set this to false.
  upgrade: false
signers:
  # We need the Goliath faucet account to sign the transaction, because we want
  # the faucet to deploy the contract. This is the Goliath faucet public key. It
  # should match the keyset above.
  - public: "1b54c9eac0047b10f7f6a6f270f7156fb519ef02c9bb96dc28a4e50c48a468f4"
publicMeta:
  # The faucet contract only works on chain 0, so that's where we'll deploy it.
  chainId: "0"
  # The contract should be deployed by the faucet account, which means the
  # faucet account is responsible for paying the gas for this transaction. You
  # must have used the 'fund-faucet-account.yaml' request to fund the faucet
  # account before you can use this deployment request file.
  sender: "k"
  # To determine the gas limit for most requests you can simply execute the Pact
  # code in the REPL, use (env-gaslog) to measure consumption, and round up the
  # result. However, deployment is different; you can't simply measure a call to
  # (load "faucet.pact") as it will provide an inaccurate measure.
  #
  # Instead, I first set the gas limit to 150000 (the maximum) and deployed the
  # contract to our local simulation Chainweb. Then I recorded the gas
  # consumption that the node reported and rounded it up.
  gasLimit: 65000
  gasPrice: 0.0000001
  ttl: 600
It complains about the validate-principal function; however, it is defined as a Pact built-in function:
https://pact-language.readthedocs.io/en/stable/pact-functions.html?highlight=validate-principal#validate-principal
./kazora/run-deploy-contract.js
-----
executing 'local' request: kazora-details.yaml
-----
Kazora account 1b54c9eac0047b10f7f6a6f270f7156fb519ef02c9bb96dc28a4e50c48a468f4 found with 999.9935 in funds.
-----
executing 'local' request: kazora-contract-details.yaml
-----
Kazora contract not found on local Chainweb node. Deploying contract...
-----
executing 'send' request: deploy-kazora-contract.yaml
-----
Received request key: vm4O3YKKj7Ea9nR8D8nPSHuVI7OtHPJzQjk7RA7XZLI
Sending POST request with request key to /poll endpoint.
May take up to 1 minute and 30 seconds to be mined into a block.
Polling every 5 seconds until the transaction has been processed...
Waiting (15 seconds elapsed)...
Waiting (30 seconds elapsed)...
Waiting (45 seconds elapsed)...
/home/ripple/git/web3/kazora/run-deploy-contract.js:66
throw new Error(
^
Error: Failed to deploy contract: {
"callStack": [
"<interactive>:0:102: module"
],
"type": "EvalError",
"message": "Cannot resolve \"validate-principal\"",
"info": "<interactive>:0:8052"
}
at deployKazora (/home/ripple/git/web3/kazora/run-deploy-contract.js:66:13)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async main (/home/ripple/git/web3/kazora/run-deploy-contract.js:81:3)
Make sure you are using version 4.3.1 of Pact or later.
The built-in function was only added in that release:
https://github.com/kadena-io/pact/releases/tag/v4.3.1
When I look at the logs in the Google Log Viewer for my GAE project, I often see that the logs I write myself in the code are assigned to the wrong request. Most of the time the log entry is attached to the request directly after the one that actually produced it.
Since every application log entry in GAE must be grouped under a request, this means the wrong request is sometimes marked as an error: an earlier request produced the error, but the log entry is somehow assigned to the request after it.
I don't really do anything special. I use Ktor as my servlet framework and have an interceptor that writes a log entry when an exception occurs before returning status 500.
I use Java logging via SLF4J with the Google Cloud logging handler, but before that I used Logback via SLF4J and had the same problem.
The content of the logs themselves is also correct: the returned status of the request, the level of the log entry, the message, everything is fine.
I thought it might be because I use Kotlin and switch coroutine contexts during a single request, but in some cases the point where I write the log and the point where I send the response are right next to each other, so I'm not sure Kotlin has anything to do with it.
My logging.properties:
# To use this configuration, add to system properties : -Djava.util.logging.config.file="/path/to/file"
#
.level = INFO
# it is recommended that io.grpc and sun.net logging level is kept at INFO level,
# as both these packages are used by Stackdriver internals and can result in verbose / initialization problems.
io.grpc.netty.level=INFO
sun.net.level=INFO
handlers=com.google.cloud.logging.LoggingHandler
# default : java.log
com.google.cloud.logging.LoggingHandler.log=custom_log
# default : INFO
com.google.cloud.logging.LoggingHandler.level=INFO
# default : ERROR
com.google.cloud.logging.LoggingHandler.flushLevel=WARNING
# default : auto-detected, fallback "global"
#com.google.cloud.logging.LoggingHandler.resourceType=container
# custom formatter
com.google.cloud.logging.LoggingHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS %4$-6s %2$s %5$s%6$s%n
#optional enhancers (to add additional fields, labels)
#com.google.cloud.logging.LoggingHandler.enhancers=com.example.logging.jul.enhancers.ExampleEnhancer
My logging relevant dependencies:
implementation "org.slf4j:slf4j-jdk14:1.7.30"
implementation "com.google.cloud:google-cloud-logging:1.100.0"
An example logging call:
exception<Throwable> { e ->
    logger().error("Error", e)
    call.respondText(e.message ?: "", ContentType.Text.Plain, HttpStatusCode.InternalServerError)
}
with logger() being:
import org.slf4j.Logger
import org.slf4j.LoggerFactory
inline fun <reified T : Any> T.logger(): Logger = LoggerFactory.getLogger(T::class.java)
Edit:
An example of the log in Google Cloud. The first request has the query parameter GAID=cdda802e-fb9c-47ad-0794d394c913, but as you can see, the error log for that request appears under the request below it, marked in red.
I am trying to delete a view by following instructions here: Not able to delete 2sxc view and here: http://2sxc.org/en/blog/post/advanced-dynamic-data-content-understanding-content-type-scopes
I can get to the 2SexyContent-System scope without difficulty, but when I try 2SexyContent-ContentGroup I get a server 500 error. 2SexyContent-Template and the other types work just fine. The error is below.
Note: I realize the ContentGroup may not even be helpful to me. I understand Template is where I could ForceDelete, but I would like to avoid that. I was hoping ContentGroup might help me locate the parent entities that I need to remove. We have a large site and use 2sxc a lot, so I am trying to discover the best way to find these parents and delete them in a healthy way.
Message: Had an error talking to the server (status 500).
Detail: The 'ObjectContent`1' type failed to serialize the response body for content 'application/json;charset=utf-8'.
Get [server name removed]/en-us/desktopmodules/2sxc/api/eav/entities/GetAllOfTypeForAdmin?appId=2&contentType=2SexyContent-ContentGroup 500 (Internal Server Error)
(anonymous) @ VM10361:2
(anonymous) @ set.min.js?sxcver=8.9.1.13916:103
n @ set.min.js?sxcver=8.9.1.13916:99
(anonymous) @ set.min.js?sxcver=8.9.1.13916:96
(anonymous) @ set.min.js?sxcver=8.9.1.13916:131
$eval @ set.min.js?sxcver=8.9.1.13916:145
$digest @ set.min.js?sxcver=8.9.1.13916:142
$apply @ set.min.js?sxcver=8.9.1.13916:146
(anonymous) @ set.min.js?sxcver=8.9.1.13916:276
Sf @ set.min.js?sxcver=8.9.1.13916:37
d @ set.min.js?sxcver=8.9.1.13916:37
The ContentGroup is almost certainly the wrong way to do this, because the ContentGroup is just the subset of items assigned to the current module/instance.
If you're trying to support force-delete, it's best to check out how the admin UI does it.
I have built a pipeline on AppEngine that loads data from Cloud Storage to BigQuery. This works fine... until there is an error. How can I catch loading exceptions raised by BigQuery from my AppEngine code?
The code in the pipeline looks like this:
# Run the job
credentials = AppAssertionCredentials(scope=SCOPE)
http = credentials.authorize(httplib2.Http())
bigquery_service = build("bigquery", "v2", http=http)

jobCollection = bigquery_service.jobs()
result = jobCollection.insert(
    projectId=PROJECT_ID,
    body=build_job_data(table_name, cloud_storage_files)).execute()
# The job id to poll comes back in the insert result.
insertResponse = result['jobReference']['jobId']

# Get the status
while (not allDone and not runtime.is_shutting_down()):
    try:
        job = jobCollection.get(projectId=PROJECT_ID,
                                jobId=insertResponse).execute()
        # Do something with job.get('status')
    except:
        exc_type, exc_value, exc_traceback = sys.exc_info()
        logging.error(traceback.format_exception(exc_type, exc_value, exc_traceback))
    time.sleep(30)
This gives me status errors or major connectivity errors, but what I am looking for are functional errors from BigQuery, such as field format conversion errors, schema structure issues, or other problems BigQuery may hit while trying to insert rows into tables.
If any "functional" error happens on BigQuery's side, this code runs successfully and completes normally, but no table is written to BigQuery. Not easy to debug when this happens...
You can use the HTTP error code from the exception. BigQuery is a REST API, so the response codes that are returned match the description of HTTP error codes here.
Here is some code that handles retryable errors (connection, rate limit, etc), but re-raises when it is an error type that it doesn't expect.
except HttpError, err:
    # If the error is a rate limit or connection error, wait and
    # try again.
    #   403: Forbidden: Both access denied and rate limits.
    #   408: Timeout
    #   500: Internal Service Error
    #   503: Service Unavailable
    if err.resp.status in [403, 408, 500, 503]:
        print '%s: Retryable error %s, waiting' % (
            self.thread_id, err.resp.status,)
        time.sleep(5)
    else:
        raise
If you want even better error handling, check out the BigqueryError class in the bq command-line client (this used to be available on code.google.com, but with the recent switch to gcloud it isn't any more; if you have gcloud installed, the bq.py and bigquery_client.py files should be in the installation).
The key here is this part of the pasted code:
except:
    exc_type, exc_value, exc_traceback = sys.exc_info()
    logging.error(traceback.format_exception(exc_type, exc_value, exc_traceback))
time.sleep(30)
This "except" is catching every exception, logging it, and letting the process continue without any consideration for re-trying.
The question is, what would you like to do instead? At least the intention is there with the "#Do something" comment.
As a suggestion, consider App Engine's task queues to check the status, instead of a loop with a 30 second wait. When tasks get an exception, they are automatically retried - and you can tune that behavior.
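A rough sketch of that suggestion, assuming App Engine's deferred library (the 'deferred' builtin must be enabled in app.yaml). check_job_status is a hypothetical helper; PROJECT_ID, SCOPE, insertResponse, and the service construction mirror the question's snippet:

import logging
import httplib2

from google.appengine.ext import deferred
from apiclient.discovery import build
from oauth2client.appengine import AppAssertionCredentials

def check_job_status(job_id):
    # Rebuild the BigQuery client inside the task, since it runs in a new request.
    credentials = AppAssertionCredentials(scope=SCOPE)
    http = credentials.authorize(httplib2.Http())
    bigquery_service = build("bigquery", "v2", http=http)

    job = bigquery_service.jobs().get(projectId=PROJECT_ID, jobId=job_id).execute()
    status = job.get('status', {})
    if status.get('state') != 'DONE':
        # Not finished yet: re-enqueue this check to run again in 30 seconds.
        # Unlike the sleep loop, a task that raises is retried automatically.
        deferred.defer(check_job_status, job_id, _countdown=30)
    elif status.get('errorResult'):
        logging.error("Load job %s failed: %s", job_id, status['errorResult'])

# Kick off the first check right after inserting the load job:
# deferred.defer(check_job_status, insertResponse, _countdown=30)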
When I do a filter on a ForeignKey field with __isnull=True, this exception is raised:
DatabaseError: This query is not supported by the database.
However, __isnull=False on ForeignKey works as long as there are no other inequality filters (which I would expect). And __isnull=True works for other field types.
So why does __isnull=True not work on ForeignKey? It seems that DBIndexer tries to make it work as shown here:
https://github.com/django-nonrel/django-dbindexer/blob/dbindexer-1.4/dbindexer/backends.py
But then there is an exception in djangotoolbox:
File "/Users//Documents/workspace/-gae-dev/src/django/db/models/query.py", line 107, in _result_iter
self._fill_cache()
File "/Users//Documents/workspace/-gae-dev/src/django/db/models/query.py", line 774, in _fill_cache
self._result_cache.append(self._iter.next())
File "/Users//Documents/workspace/-gae-dev/src/django/db/models/query.py", line 275, in iterator
for row in compiler.results_iter():
File "/Users//Documents/workspace/-gae-dev/src/djangotoolbox/db/basecompiler.py", line 337, in results_iter
results = self.build_query(fields).fetch(
File "/Users//Documents/workspace/-gae-dev/src/djangotoolbox/db/basecompiler.py", line 428, in build_query
self.check_query()
File "/Users//Documents/workspace/-gae-dev/src/djangotoolbox/db/basecompiler.py", line 409, in check_query
raise DatabaseError("This query is not supported by the database.")
I did come across the following commented-out test case in djangoappengine, and am wondering if it is referring to the same issue:
def test_is_null(self):
    self.assertEquals(FieldsWithOptionsModel.objects.filter(
        floating_point__isnull=True).count(), 0)
    FieldsWithOptionsModel(
        integer=5.4, email='shinra.tensai@sixpaths.com',
        time=datetime.datetime.now().time()).save()
    self.assertEquals(FieldsWithOptionsModel.objects.filter(
        floating_point__isnull=True).count(), 1)
    # XXX: These filters will not work because of a Django bug.
    # self.assertEquals(FieldsWithOptionsModel.objects.filter(
    #     foreign_key=None).count(), 1)
    # (it uses left outer joins if checked against isnull)
    # self.assertEquals(FieldsWithOptionsModel.objects.filter(
    #     foreign_key__isnull=True).count(), 1)
Alex Burgel on the NonRel project set me straight:
The NonRel/dbindexer project fixes this query (which otherwise doesn't work due to this Django bug: https://code.djangoproject.com/ticket/10790). To set up dbindexer:
of course, add it to INSTALLED_APPS
also in settings.py, set DATABASES['default']['ENGINE'] = 'dbindexer'
also in settings.py, set DBINDEXER_BACKENDS to use FKNullFix. For example:
DBINDEXER_BACKENDS = (
    'dbindexer.backends.BaseResolver',
    'dbindexer.backends.FKNullFix',
    'dbindexer.backends.InMemoryJOINResolver',
    'dbindexer.backends.ConstantFieldJOINResolver',
)
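With that configuration in place, a ForeignKey null filter like the one from the question should work on the nonrel backend. A small sketch with hypothetical models, just to show the query shape that FKNullFix rewrites:

from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Article(models.Model):
    # null=True so that articles without an author can exist in the datastore
    author = models.ForeignKey(Author, null=True)

# Previously raised "This query is not supported by the database.";
# with dbindexer and FKNullFix configured it returns the orphaned articles.
orphans = Article.objects.filter(author__isnull=True)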