I am pulling in a large database with some large tables (over 80 million rows) from Microsoft SQL Server.
I am experiencing weird behaviour where the table loads restart after around 60 minutes (or a few minutes past that). I am doing a full load + CDC, and the problem occurs during the full-load phase.
These are my task settings:
{
"TargetMetadata": {
"TargetSchema": "",
"SupportLobs": true,
"FullLobMode": false,
"LobChunkSize": 0,
"LimitedSizeLobMode": true,
"LobMaxSize": 102400,
"InlineLobMaxSize": 0,
"LoadMaxFileSize": 0,
"ParallelLoadThreads": 0,
"ParallelLoadBufferSize": 0,
"BatchApplyEnabled": false,
"TaskRecoveryTableEnabled": false
},
"FullLoadSettings": {
"TargetTablePrepMode": "DROP_AND_CREATE",
"CreatePkAfterFullLoad": true,
"StopTaskCachedChangesApplied": false,
"StopTaskCachedChangesNotApplied": false,
"MaxFullLoadSubTasks": 16,
"TransactionConsistencyTimeout": 600,
"CommitRate": 50000
},
"Logging": {
"EnableLogging": true,
"LogComponents": [
{
"Id": "SOURCE_UNLOAD",
"Severity": "LOGGER_SEVERITY_DEFAULT"
},
{
"Id": "TARGET_LOAD",
"Severity": "LOGGER_SEVERITY_DEFAULT"
},
{
"Id": "SOURCE_CAPTURE",
"Severity": "LOGGER_SEVERITY_DEFAULT"
},
{
"Id": "TARGET_APPLY",
"Severity": "LOGGER_SEVERITY_DEFAULT"
},
{
"Id": "TASK_MANAGER",
"Severity": "LOGGER_SEVERITY_DEFAULT"
}
]
},
"ControlTablesSettings": {
"historyTimeslotInMinutes": 5,
"ControlSchema": "",
"HistoryTimeslotInMinutes": 5,
"HistoryTableEnabled": false,
"SuspendedTablesTableEnabled": false,
"StatusTableEnabled": false
},
"StreamBufferSettings": {
"StreamBufferCount": 3,
"StreamBufferSizeInMB": 8,
"CtrlStreamBufferSizeInMB": 5
},
"ChangeProcessingDdlHandlingPolicy": {
"HandleSourceTableDropped": true,
"HandleSourceTableTruncated": true,
"HandleSourceTableAltered": true
},
"ErrorBehavior": {
"DataErrorPolicy": "LOG_ERROR",
"DataTruncationErrorPolicy": "LOG_ERROR",
"DataErrorEscalationPolicy": "SUSPEND_TABLE",
"DataErrorEscalationCount": 0,
"TableErrorPolicy": "SUSPEND_TABLE",
"TableErrorEscalationPolicy": "SUSPEND_TABLE",
"TableErrorEscalationCount": 0,
"RecoverableErrorCount": -1,
"RecoverableErrorInterval": 5,
"RecoverableErrorThrottling": true,
"RecoverableErrorThrottlingMax": 1800,
"ApplyErrorDeletePolicy": "IGNORE_RECORD",
"ApplyErrorInsertPolicy": "LOG_ERROR",
"ApplyErrorUpdatePolicy": "LOG_ERROR",
"ApplyErrorEscalationPolicy": "LOG_ERROR",
"ApplyErrorEscalationCount": 0,
"ApplyErrorFailOnTruncationDdl": false,
"FullLoadIgnoreConflicts": true,
"FailOnTransactionConsistencyBreached": false,
"FailOnNoTablesCaptured": false
},
"ChangeProcessingTuning": {
"BatchApplyPreserveTransaction": true,
"BatchApplyTimeoutMin": 1,
"BatchApplyTimeoutMax": 30,
"BatchApplyMemoryLimit": 500,
"BatchSplitSize": 0,
"MinTransactionSize": 1000,
"CommitTimeout": 1,
"MemoryLimitTotal": 1024,
"MemoryKeepTime": 60,
"StatementCacheSize": 50
},
"PostProcessingRules": null,
"CharacterSetSettings": null,
"LoopbackPreventionSettings": null
}
I do not see anything useful in the CloudWatch logs, except that the endpoint is disconnected. More logs below:
2022-08-21T13:34:59 [SOURCE_UNLOAD ]E: Endpoint is disconnected [1020414] (endpointshell.c:3807)
2022-08-21T13:34:59 [SOURCE_UNLOAD ]E: Error executing source loop [1020414] (streamcomponent.c:1872)
2022-08-21T13:34:59 [TASK_MANAGER ]E: Stream component failed at subtask 5, component st_5_STAX7TAMIRB2R3PAIOSXO6TO7KT6PPIVMPGYLNQ [1020414] (subtask.c:1414)
2022-08-21T13:34:59 [SOURCE_UNLOAD ]E: Stream component 'st_5_STAX7TAMIRB2R3PAIOSXO6TO7KT6PPIVMPGYLNQ' terminated [1020414] (subtask.c:1594)
2022-08-21T13:34:59 [TASK_MANAGER ]E: Task error notification received from subtask 5, thread 0 [1020414] (replicationtask.c:2880)
2022-08-21T13:34:59 [TASK_MANAGER ]E: Endpoint is disconnected; Error executing source loop; Stream component failed at subtask 5, component st_5_STAX7TAMIRB2R3PAIOSXO6TO7KT6PPIVMPGYLNQ; Stream component 'st_5_STAX7TAMIRB2R3PAIOSXO6TO7KT6PPIVMPGYLNQ' terminated [1020414] (replicationtask.c:2888)
Also, I noticed that the DMS job tries this import 10 times; each attempt takes around an hour to reach this message, and at the end the task ends up in the FAILED state.
We have tried increasing the timeout on the SQL Server side (QueryTimeOut) to 2 hours, but that didn't help.
Can someone advise what else can be done here? Also, I would like the DMS job not to retry 10 times, since 3-4 attempts would probably be enough.
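For what it's worth, I assume the retry count is governed by ErrorBehavior.RecoverableErrorCount, which is -1 in my settings above (if I understand the docs correctly, -1 means retry indefinitely, so the 10 attempts may come from some other limit). A sketch of the change I have in mind, applied with aws dms modify-replication-task:

"ErrorBehavior": {
    ...
    "RecoverableErrorCount": 4,
    "RecoverableErrorInterval": 5,
    ...
}

aws dms modify-replication-task \
    --replication-task-arn <task-arn> \
    --replication-task-settings file://task-settings.json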
I am getting an "Unable to save connector configuration. Please try again." error when I try to save my connector in MS Teams.
Error details are below:
Received error from connectors
{
"seq": 1623959414107,
"timestamp": 1623959424578,
"flightSettings": {
"Name": "ConnectorFrontEndSettings",
"AriaSDKToken": "d127f72a3abd41c9b9dd94faca947689-d58285e6-3a68-4cab-a458-37b9d9761d35-7033",
"SPAEnabled": true,
"ClassificationFilterEnabled": true,
"ClientRoutingEnabled": true,
"EnableYammerGroupOption": true,
"EnableFadeMessage": false,
"EnableDomainBasedOwaConnectorList": false,
"EnableDomainBasedTeamsConnectorList": false,
"DevPortalSPAEnabled": true,
"ShowHomeNavigationButtonOnConfigurationPage": false,
"DisableConnectToO365InlineDeleteFeedbackPage": true
},
"status": 500,
"clientType": "SkypeSpaces",
"connectorType": "9891a151-05c2-4c8d-9064-aba9d928cf94",
"name": "handleMessageError"
}
I couldn't figure out the source of the problem. Any ideas are appreciated.
I have solved the issue.
I had used this tutorial provided by MS: https://learn.microsoft.com/en-us/learn/modules/msteams-webhooks-connectors/7-exercise-o365-connectors
Don't use it.
Use this sample instead: https://github.com/OfficeDev/Microsoft-Teams-Samples/tree/main/samples/connector-generic/nodejs
I am new to GeoServer. I have created a shapefile of my district and added certain COVID-related attributes, such as covid count, covid zone, and district name. I have loaded this into a PostGIS database and I can see the attributes in the table. But when I try to retrieve the feature using Postman, the attribute values are not retrieved. Can anyone help?
Below is my request:
http://localhost:8080/geoserver/rest/workspaces/DistrictWpc/datastores/district_store/featuretypes/ernakulam.json
The response is:
{
"featureType": {
"name": "ernakulam",
"nativeName": "ernakulam",
"namespace": {
"name": "DistrictWpc",
"href": "http://localhost:8080/geoserver/rest/namespaces/DistrictWpc.json"
},
"title": "ernakulam",
"keywords": {
"string": [
"features",
"ernakulam"
]
},
"srs": "EPSG:404000",
"nativeBoundingBox": {
"minx": 76.1618881225586,
"maxx": 76.6080093383789,
"miny": 9.63820648193359,
"maxy": 10.1869020462036
},
"latLonBoundingBox": {
"minx": 76.1618881225586,
"maxx": 76.6080093383789,
"miny": 9.63820648193359,
"maxy": 10.1869020462036,
"crs": "EPSG:4326"
},
"projectionPolicy": "FORCE_DECLARED",
"enabled": true,
"store": {
"#class": "dataStore",
"name": "DistrictWpc:district_store",
"href": "http://localhost:8080/geoserver/rest/workspaces/DistrictWpc/datastores/district_store.json"
},
"serviceConfiguration": false,
"maxFeatures": 0,
"numDecimals": 0,
"padWithZeros": false,
"forcedDecimal": false,
"overridingServiceSRS": false,
"skipNumberMatched": false,
"circularArcPresent": false,
"attributes": {
"attribute": [
{
"name": "id",
"minOccurs": 0,
"maxOccurs": 1,
"nillable": true,
"binding": "java.lang.Long"
},
{
"name": "district",
"minOccurs": 0,
"maxOccurs": 1,
"nillable": true,
"binding": "java.lang.String"
},
{
"name": "count",
"minOccurs": 0,
"maxOccurs": 1,
"nillable": true,
"binding": "java.lang.Long"
},
{
"name": "zone",
"minOccurs": 0,
"maxOccurs": 1,
"nillable": true,
"binding": "java.lang.String"
},
{
"name": "geom",
"minOccurs": 0,
"maxOccurs": 1,
"nillable": true,
"binding": "org.locationtech.jts.geom.MultiPolygon"
}
]
}
}
}
GeoServer's REST API is used for administrative tasks and as such does not provide a way to see the actual data you have stored in the database, just the details of how GeoServer connects to the database and some metadata about the store.
To access the actual data you need to use the WFS endpoint, which is defined by the OGC WFS specification and described in the GeoServer manual.
If you must have REST access to the features, you could use the experimental OGC Features API module to do this.
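For example, a WFS GetFeature request for the layer in the question would look something like this (a sketch using the workspace and layer names above):

http://localhost:8080/geoserver/DistrictWpc/ows?service=WFS&version=2.0.0&request=GetFeature&typeNames=DistrictWpc:ernakulam&outputFormat=application/json

This returns the features as GeoJSON, including the attribute values (district, count, zone, and the geometry).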
I'm trying to set up my first c-lightning node with docker-compose, using the image from https://hub.docker.com/r/elementsproject/lightningd. Currently, my node can connect and open channels with other nodes (and I can open a channel to the node just fine), but it still doesn't show up (i.e. has no information) on most explorers.
I've tried opening port 9735, setting bind-addr to the Docker container's IP address, and even setting announce-addr to a Tor address. Nothing works.
The following are the current results of getinfo and listconfigs:
getinfo
{
"id": "03db40337c2de299a8fa454fdf89d311615d50a27129d43286696d9e497b2b027a",
"alias": "TestName",
"color": "fff000",
"num_peers": 3,
"num_pending_channels": 0,
"num_active_channels": 3,
"num_inactive_channels": 0,
"address": [
{
"type": "ipv4",
"address": "68.183.195.14",
"port": 9735
}
],
"binding": [
{
"type": "ipv4",
"address": "172.18.0.3",
"port": 9735
}
],
"version": "v0.7.1-906-gf657146",
"blockheight": 601917,
"network": "bitcoin",
"msatoshi_fees_collected": 0,
"fees_collected_msat": "0msat"
}
listconfigs
{
"# version": "v0.7.1-906-gf657146",
"lightning-dir": "/root/.lightning",
"wallet": "sqlite3:///root/.lightning/lightningd.sqlite3",
"plugin": "/usr/local/bin/../libexec/c-lightning/plugins/pay",
"plugin": "/usr/local/bin/../libexec/c-lightning/plugins/autoclean",
"plugin": "/usr/local/bin/../libexec/c-lightning/plugins/fundchannel",
"network": "bitcoin",
"allow-deprecated-apis": true,
"always-use-proxy": false,
"daemon": "false",
"rpc-file": "lightning-rpc",
"rgb": "fff000",
"alias": "HubTest",
"bitcoin-rpcuser": [redacted],
"bitcoin-rpcpassword": [redacted],
"bitcoin-rpcconnect": "bitcoind",
"bitcoin-retry-timeout": 60,
"pid-file": "lightningd-bitcoin.pid",
"ignore-fee-limits": false,
"watchtime-blocks": 144,
"max-locktime-blocks": 2016,
"funding-confirms": 3,
"commit-fee-min": 200,
"commit-fee-max": 2000,
"commit-fee": 500,
"cltv-delta": 14,
"cltv-final": 10,
"commit-time": 10,
"fee-base": 0,
"rescan": 15,
"fee-per-satoshi": 1,
"max-concurrent-htlcs": 30,
"min-capacity-sat": 10000,
"bind-addr": "172.18.0.3:9735",
"announce-addr": "68.183.195.14:9735",
"offline": "false",
"autolisten": true,
"disable-dns": "false",
"enable-autotor-v2-mode": "false",
"encrypted-hsm": false,
"log-level": "DEBUG",
"log-prefix": "lightningd(7):"
}
Is there something wrong with this configuration? Or is it another issue after all?
I understand that explorers update their node lists irregularly, and as long as the node can open channels (and can be connected to), everything is fine, but this has been bugging me for weeks.
Updating the Docker image and setting bind-addr to 0.0.0.0:9735 somehow fixed the problem, for reasons unknown.
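For reference, the relevant options in the lightningd config would then look roughly like this (a sketch; the public address is the one from getinfo above):

# Listen on all interfaces inside the container; announce the public IP
bind-addr=0.0.0.0:9735
announce-addr=68.183.195.14:9735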
I need FEMMES.COM to get tokenized as singular + plural forms of the base word FEMME.
Custom Analyzer Config
"analyzers": [ { "#odata.type": "#Microsoft.Azure.Search.CustomAnalyzer", "name": "text_language_search_custom_analyzer", "tokenizer": "text_language_search_custom_analyzer_ms_tokenizer", "tokenFilters": [ "lowercase", "asciifolding" ], "charFilters": [ "html_strip" ] } ], "tokenizers": [ { "#odata.type": "#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer", "name": "text_language_search_custom_analyzer_ms_tokenizer", "maxTokenLength": 300, "isSearchTokenizer": false, "language": "english" } ], "tokenFilters": [], "charFilters": []}
Analyze API call for FEMMES
{ "analyzer": "text_language_search_custom_analyzer", "text": "FEMMES" }
Analyze API response for FEMMES
{ "#odata.context": "https://one-adscope-search-eu-stage.search.windows.net/$metadata#Microsoft.Azure.Search.V2016_09_01.AnalyzeResult", "tokens": [ { "token": "femme", "startOffset": 0, "endOffset": 6, "position": 0 }, { "token": "femmes", "startOffset": 0, "endOffset": 6, "position": 0 } ] }
Analyze API response for FEMMES.COM
{ "#odata.context": "https://one-adscope-search-eu-stage.search.windows.net/$metadata#Microsoft.Azure.Search.V2016_09_01.AnalyzeResult", "tokens": [ { "token": "femmes", "startOffset": 0, "endOffset": 6, "position": 0 }, { "token": "com", "startOffset": 7, "endOffset": 10, "position": 1 } ] }
Analyze API response for FEMMES COM
{ "#odata.context": "https://one-adscope-search-eu-stage.search.windows.net/$metadata#Microsoft.Azure.Search.V2016_09_01.AnalyzeResult", "tokens": [ { "token": "femme", "startOffset": 0, "endOffset": 6, "position": 0 }, { "token": "femmes", "startOffset": 0, "endOffset": 6, "position": 0 }, { "token": "com", "startOffset": 7, "endOffset": 10, "position": 1 } ]}
I think I figured this one out myself after some experimentation. I found that a MappingCharFilter could be used to replace . with , before the indexer did the tokenization. This allowed the lemmatization/stemming to work as expected on the terms in question. I need to do more thorough integration tests with our other use cases, but I think this would solve the problem for anybody facing the same type of issue.
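The charFilter definition I mean looks roughly like this (a sketch; the filter name is mine, and it would also need to be added to the analyzer's charFilters list in the config above):

"charFilters": [
  {
    "#odata.type": "#Microsoft.Azure.Search.MappingCharFilter",
    "name": "dot_to_comma_char_filter",
    "mappings": [ ".=>," ]
  }
]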
My previous answer was not correct. The Azure Search implementation actually applies the language tokenizer BEFORE token filters, which essentially made the WordDelimiter token filter useless in my use case.
What I ended up having to do was pre-process the data BEFORE I uploaded it to Azure for indexing. In my C# code, I added some regex logic that breaks apart text like FEMMES2017 into FEMMES 2017 before sending it to Azure. This way, when the text gets to Azure, the indexer sees FEMMES by itself and properly tokenizes it as FEMME and FEMMES using the language tokenizer.
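For illustration, a minimal sketch of that pre-processing step (the class name, method name, and exact regex here are my own, not the original code):

using System;
using System.Text.RegularExpressions;

class TokenPreprocessor
{
    // Insert a space at every letter/digit boundary so that, e.g.,
    // "FEMMES2017" becomes "FEMMES 2017" before upload to Azure Search.
    public static string SplitLetterDigitRuns(string text) =>
        Regex.Replace(text, @"(?<=\p{L})(?=\p{N})|(?<=\p{N})(?=\p{L})", " ");

    static void Main() =>
        Console.WriteLine(SplitLetterDigitRuns("FEMMES2017")); // prints "FEMMES 2017"
}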
I'm saving an appointment to an AppointmentCalendar using AppointmentCalendar.SaveAppointmentAsync(...). This appointment is the master of a series containing Recurrence information.
Right after saving the appointment I retrieve the very same appointment again by calling GetAppointmentAsync on the same calendar using the appointment's LocalId.
Here is the unexpected behavior: there is a difference in the loaded appointment: the Recurrence.Until date is off by one.
Why is this?
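For reference, the save-and-reload flow is essentially this (a sketch with my own variable names, error handling omitted):

using Windows.ApplicationModel.Appointments;

var store = await AppointmentManager.RequestStoreAsync(AppointmentStoreAccessType.AppCalendarsReadWrite);
var calendars = await store.FindAppointmentCalendarsAsync();
var calendar = calendars[0]; // assumption: the calendar the appointment is saved to
await calendar.SaveAppointmentAsync(appointment);
var reloaded = await calendar.GetAppointmentAsync(appointment.LocalId);
// reloaded.Recurrence.Until now differs from appointment.Recurrence.Until (see the JSON below)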
Here are the involved appointments serialized as JSON:
The Appointment I save:
{
"Location": "",
"AllDay": false,
"Organizer": null,
"Duration": "00:45:00",
"Details": "",
"BusyStatus": 0,
"Recurrence": {
"Unit": 1,
"Occurrences": 260,
"Month": 1,
"Interval": 1,
"DaysOfWeek": 62,
"Day": 1,
"WeekOfMonth": 0,
"Until": "2016-12-31T01:00:00+01:00",
"TimeZone": "Europe/Budapest",
"RecurrenceType": 0,
"CalendarIdentifier": ""
},
"Subject": "test",
"Uri": null,
"StartTime": "2016-01-04T11:30:00+01:00",
"Sensitivity": 0,
"Reminder": null,
"Invitees": {},
"AllowNewTimeProposal": true,
"UserResponse": 0,
"RoamingId": "c,b,fd",
"ReplyTime": null,
"IsResponseRequested": true,
"IsOrganizedByUser": false,
"IsCanceledMeeting": false,
"OnlineMeetingLink": "",
"HasInvitees": false,
"CalendarId": "b,37,355",
"LocalId": "c,37,20a3",
"OriginalStartTime": null,
"RemoteChangeNumber": 0,
"DetailsKind": 0,
"ChangeNumber": 39537577
}
And here is this very same Appointment after retrieving it by calling GetAppointmentAsync:
{
"Location": "",
"AllDay": false,
"Organizer": null,
"Duration": "00:45:00",
"Details": "",
"BusyStatus": 0,
"Recurrence": {
"Unit": 1,
"Occurrences": 260,
"Month": 1,
"Interval": 1,
"DaysOfWeek": 62,
"Day": 1,
"WeekOfMonth": 0,
"Until": "2016-12-30T01:00:00+01:00",
"TimeZone": "Europe/Budapest",
"RecurrenceType": 0,
"CalendarIdentifier": "GregorianCalendar"
},
"Subject": "test",
"Uri": null,
"StartTime": "2016-01-04T11:30:00+01:00",
"Sensitivity": 0,
"Reminder": null,
"Invitees": {},
"AllowNewTimeProposal": true,
"UserResponse": 0,
"RoamingId": "c,b,fd",
"ReplyTime": null,
"IsResponseRequested": true,
"IsOrganizedByUser": false,
"IsCanceledMeeting": false,
"OnlineMeetingLink": "",
"HasInvitees": false,
"CalendarId": "b,37,355",
"LocalId": "c,37,20a3",
"OriginalStartTime": null,
"RemoteChangeNumber": 0,
"DetailsKind": 0,
"ChangeNumber": 39537577
}
Diffing these JSONs, you get two differences in the Recurrence part:
CalendarIdentifier is empty in the original appointment to save (because the setter is private). But more importantly: Recurrence.Until differs!
Recurrence.Until for appointment to save: "2016-12-31T01:00:00+01:00"
Recurrence.Until for appointment after loading: "2016-12-30T01:00:00+01:00"
One day is missing.
Why is this? Is there anything else I need to do when saving the appointment? Or worse: Is it just an edge case with my calendars and appointments, maybe even connected to the current date?
(SDK Version 10.0.14393.0, Win 10 Anniversary)
I did a lot of testing on my side. In my testing, if I set the time of the Until day to before 8:00 AM, the resulting Recurrence.Until is off by one day, as you showed above; setting any other time gives the right day, but no matter what time you actually set, the result you get back is 8:00 AM.
It seems to be related to the timezone (mine is UTC+8:00). The simple workaround is, when setting Until, to assign just the date, like this: recurrence.Until = UntilDatePicker.Date;. Don't set a specific time on Until. We don't actually need a specific time when setting Until; even when we set it manually in the Calendar app, only the date is used.
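Put together, a recurrence matching the question's JSON with a date-only Until would look roughly like this (a sketch; the enum values mirror Unit = 1, Interval = 1, and DaysOfWeek = 62 from the JSON above):

using System;
using Windows.ApplicationModel.Appointments;

var recurrence = new AppointmentRecurrence
{
    Unit = AppointmentRecurrenceUnit.Weekly,                       // Unit = 1
    Interval = 1,
    DaysOfWeek = AppointmentDaysOfWeek.Monday | AppointmentDaysOfWeek.Tuesday
               | AppointmentDaysOfWeek.Wednesday | AppointmentDaysOfWeek.Thursday
               | AppointmentDaysOfWeek.Friday,                     // DaysOfWeek = 62
    // Date only, no time-of-day, to avoid the off-by-one-day shift described above.
    Until = new DateTimeOffset(new DateTime(2016, 12, 31))
};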
I also uploaded a demo that you can download for further testing.