Does Google Pub/Sub have a concept of partitions like Kafka?

What is the partitioning strategy when n subscribers share one subscription on a topic in Google Pub/Sub?
Is it round robin? Or is each subscriber guaranteed a set of keys as long as there is no rebalancing?

You can create several subscriptions on the same topic, each with a filter on the message attributes, so that each subscription only receives a subset of the messages.
You can't partition messages among the subscribers of a single subscription.
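As a rough sketch (project, topic, and subscription names here are made up), attribute filtering with the google-cloud-pubsub Python client looks something like this:

```python
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
topic_path = subscriber.topic_path("my-project", "orders")

# Subscribers attached to this subscription only ever see messages
# published with the attribute region=eu; non-matching messages are
# filtered out server-side and acknowledged automatically.
subscription_path = subscriber.subscription_path("my-project", "orders-eu")
subscriber.create_subscription(
    request={
        "name": subscription_path,
        "topic": topic_path,
        "filter": 'attributes.region = "eu"',
    }
)
```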

Pub/Sub Lite, however, does have a concept of partitions:
https://cloud.google.com/pubsub/lite/docs/topics#provisioning_capacity
https://cloud.google.com/pubsub/lite/docs/subscribing#receiving_messages
To receive messages from a Lite subscription, request messages from the Lite subscription. The client library automatically connects to the partitions in the Lite topic attached to the Lite subscription. If more than one subscriber client is instantiated, messages will be distributed across all clients. The number of partitions in the topic determines the maximum number of subscriber clients that can simultaneously connect to a subscription.
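For illustration, here is roughly what a Lite subscriber looks like with the google-cloud-pubsublite Python client (the project number, zone, and subscription ID are placeholders); each additional client instance like this one would be assigned a share of the topic's partitions:

```python
from concurrent.futures import TimeoutError

from google.cloud.pubsublite.cloudpubsub import SubscriberClient
from google.cloud.pubsublite.types import (
    CloudRegion,
    CloudZone,
    FlowControlSettings,
    SubscriptionPath,
)

location = CloudZone(CloudRegion("us-central1"), "a")
subscription_path = SubscriptionPath(123456789, location, "my-lite-sub")

def callback(message):
    print(message.data)
    message.ack()

with SubscriberClient() as subscriber_client:
    # The client connects to the partitions behind the Lite subscription;
    # flow control is applied per partition.
    future = subscriber_client.subscribe(
        subscription_path,
        callback=callback,
        per_partition_flow_control_settings=FlowControlSettings(
            messages_outstanding=1000,
            bytes_outstanding=10 * 1024 * 1024,
        ),
    )
    try:
        future.result(timeout=60)  # listen for one minute, then stop
    except TimeoutError:
        future.cancel()
```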

Related

Is there a way to raise a SNOW ticket as a notification for query failures in Snowflake?

I was going through the integration documents available for Snowflake and ServiceNow, but all of them focus on Snowflake consuming ServiceNow data for analytics. I didn't find anything about creating tickets for failures in Snowflake. Is it possible?
This isn't about the monitoring and notification features of Snowflake, but about connecting to ServiceNow and raising a ticket for query failures (tasks, stored procedures, etc.).
Any ideas?
There's no functionality like that as of now. I recommend you open an Idea for it; if enough customers want it, our Product Management team will review it.
For Snowpipe, we found a way: we send the error message to SNS, and a Lambda function then calls ServiceNow's REST API to create a ticket. A sketch of such a Lambda is below.
For Tasks, we found that it is possible to use External Functions to notify AWS whenever a Task fails, but we haven't implemented that yet.
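A minimal sketch of that Lambda, assuming an SNS trigger and ServiceNow's standard Table API (the instance name, credentials, and field values are placeholders, not details from the answer above):

```python
import json
import os

import requests

def lambda_handler(event, context):
    # SNS delivers the original message under Records[0].Sns.Message.
    error_message = event["Records"][0]["Sns"]["Message"]

    instance = os.environ["SNOW_INSTANCE"]  # e.g. "mycompany"
    url = f"https://{instance}.service-now.com/api/now/table/incident"

    # Create an incident via the ServiceNow Table API with basic auth.
    response = requests.post(
        url,
        auth=(os.environ["SNOW_USER"], os.environ["SNOW_PASSWORD"]),
        headers={"Content-Type": "application/json"},
        json={
            "short_description": "Snowpipe load failure",
            "description": error_message,
        },
    )
    response.raise_for_status()
    return {"statusCode": 201, "body": json.dumps(response.json())}
```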
Email is a simple way. You need to determine how your ServiceNow instance processes emails; we implemented incident creation from Azure App Insights this way, based on emails.
In ServiceNow, find the Inbound Action that should process the email, or create one.
ServiceNow provides every instance with an email account.
The instance email is usually xxxx@service-now.com.
If your instance URL is "audi.service-now.com", the email would be "audi@service-now.com".
For a PDI, the domain is servicenowdevelopers.com, e.g. dev12345@servicenowdevelopers.com.

When is a PubSub Subscription considered to be inactive?

As per Google Cloud documentation:
By default, subscriptions expire after 31 days of inactivity (for instance, if there are no active connections, pull requests, or push successes). If Pub/Sub detects subscriber activity, the subscription deletion clock restarts. Using subscription expiration policies, you can configure the inactivity duration or make the subscription persistent regardless of activity. You can also delete a subscription manually.
Is subscription considered to be inactive even if there are unacked messages on it?
In addition to the documentation you shared, pay attention to the following statement, taken from another part of the documentation:
An unacknowledged message is retained in a subscription for up to message_retention_duration after it is published (the default is 7 days).
Therefore, after messages are published to the subscription, they are retained for 7 days (by default) while the deletion clock runs. If there are no further calls on the subscription, the deletion clock keeps counting toward the expiration time, because there is no activity on the subscription. Note also that after 7 days these unacknowledged messages are deleted.
Yes, a subscription is considered inactive if there is no subscriber activity on it, even if there are unacknowledged messages available to that subscription.
Subscription expiration is a configurable subscription property as described here.
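For example, with the Python client both the retention and the expiration behavior can be pinned down at creation time (names here are illustrative):

```python
from google.cloud import pubsub_v1
from google.protobuf import duration_pb2

subscriber = pubsub_v1.SubscriberClient()
topic_path = subscriber.topic_path("my-project", "my-topic")
subscription_path = subscriber.subscription_path("my-project", "my-sub")

subscriber.create_subscription(
    request={
        "name": subscription_path,
        "topic": topic_path,
        # Retain unacknowledged messages for the full 7-day default.
        "message_retention_duration": duration_pb2.Duration(
            seconds=7 * 24 * 60 * 60
        ),
        # Expire after 90 days without subscriber activity; passing an
        # expiration_policy with no ttl would make the subscription
        # never expire.
        "expiration_policy": {
            "ttl": duration_pb2.Duration(seconds=90 * 24 * 60 * 60)
        },
    }
)
```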

Cloud Pub/Sub is available in a specific region. What does this mean?

Does the release-notes statement that Cloud Pub/Sub is now available in a specific region mean that data movement (publishing to a topic, subscribing to a topic, storing messages) happens within that region only?
GDPR has a data residency requirement that data shall not move out of the geography where it originated.
Please confirm.
Google Cloud Pub/Sub is a global service, available from everywhere. It is not possible to choose where your Pub/Sub service runs; it will be in the region closest to your project location. The fact that it is available in a new region means that the region now has the necessary infrastructure to host the Pub/Sub service.
Here you can see the locations of Google Cloud products, and also the release notes for Pub/Sub.
[EDIT]
Despite this, there are some ways to help you ensure compliance:
As mentioned above, Pub/Sub makes a best effort to keep your data in the location closest to the source of publication, and once it is stored, that data will not be moved without your explicit action.
Pub/Sub provides monitoring of where your data is stored, so you can track potential violations and take action, either by discarding the backlog using Pub/Sub's seek functionality (sketched after this list) or by making sure it is processed quickly.
Risk can also be limited by reducing the message retention duration.
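As a sketch of the seek-based cleanup (names are illustrative), seeking the subscription to the current time marks the whole backlog as acknowledged:

```python
from google.cloud import pubsub_v1
from google.protobuf import timestamp_pb2

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-sub")

# Everything published before this timestamp is treated as acknowledged.
now = timestamp_pb2.Timestamp()
now.GetCurrentTime()

subscriber.seek(request={"subscription": subscription_path, "time": now})
```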
Pub/Sub now supports controlling where your message data is stored. Specifically, a topic now has a configurable message storage policy which is a list of GCP regions where Pub/Sub is allowed to store message data on disk. When a message is published to a region not in this list, the request is forwarded to the nearest allowed region for processing. The policy can be configured on a topic or as an organizational policy for a project, project folder or an entire organization. When an organization policy is configured, individual topic policy can be changed only in ways that do not violate the organization policy. See: https://cloud.google.com/pubsub/docs/resource-location-restriction.
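As a sketch (the topic name and regions are illustrative), the message storage policy can be set when creating the topic:

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "eu-only-topic")

# Messages published to this topic may only be persisted in the
# listed regions; publishes arriving elsewhere are forwarded to the
# nearest allowed region.
publisher.create_topic(
    request={
        "name": topic_path,
        "message_storage_policy": {
            "allowed_persistence_regions": ["europe-west1", "europe-west4"],
        },
    }
)
```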

RTI DDS: two applications publish data on the same domain. When one application closes and reopens, it loses the data. How to solve?

I have two applications that each publish and subscribe.
App1 publishes Student(1,ABC) and Student(2,EFG).
After that I run the second application; both applications subscribe and publish on the same domain.
App2 is able to subscribe to Student(1,ABC) and Student(2,EFG), which were published by App1.
Then App2 publishes Teacher(1,AAA) and Teacher(2,BBB).
Now, from App2, I see Student(1,ABC), Student(2,EFG), Teacher(1,AAA) and Teacher(2,BBB).
When I close App2 and reopen it, I am unable to subscribe to this data again.
How can I subscribe to data that was published before closing the application?
DDS Spy shows the data is still available on the domain.
Can somebody help me understand?
How can I subscribe to data which I published before closing the application?
The behavior that you are looking for is supported by the Durability Quality of Service (QoS) setting. It specifies whether published data needs to remain available for delivery to late-joining Subscribers that join the Domain after the data was published, and for how long.
There are four different policies that you can select for the Durability QoS. In order of increasing lifetime of the data, they are:
VOLATILE (the default): Updates are delivered only to DataReaders that are present at the time of publication
TRANSIENT_LOCAL: Updates remain available for delivery to DataReaders as long as the DataWriter exists
TRANSIENT: Updates remain available for delivery to DataReaders as long as the Domain exists
PERSISTENT: Updates remain forever available for delivery to DataReaders, even after the Domain has been restarted.
For any of these policies, data is also removed if the dispose() call is used, or if its lifespan period expires.
From your short description, it looks like you need to select the TRANSIENT_LOCAL policy for your Durability QoS.
For more information, see section 2.2.3.4 DURABILITY in the DDS specification, which is freely downloadable from the OMG DDS webpage.
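Here is a rough sketch of offering and requesting TRANSIENT_LOCAL using RTI's Python binding (rti.connextdds); the class and attribute names follow the 7.x API and may differ in your version, so take it as illustrative only:

```python
import rti.connextdds as dds
import rti.idl as idl

@idl.struct
class Student:  # stand-in for the question's Student type
    id: int = 0
    name: str = ""

participant = dds.DomainParticipant(domain_id=0)
topic = dds.Topic(participant, "Student", Student)

# The DataWriter must OFFER the durability and keep samples in history.
writer_qos = dds.DataWriterQos()
writer_qos.durability.kind = dds.DurabilityKind.TRANSIENT_LOCAL
writer = dds.DataWriter(participant.implicit_publisher, topic, writer_qos)

# A late-joining DataReader must REQUEST at least the same durability
# to receive the historical samples (request/offered matching).
reader_qos = dds.DataReaderQos()
reader_qos.durability.kind = dds.DurabilityKind.TRANSIENT_LOCAL
reader = dds.DataReader(participant.implicit_subscriber, topic, reader_qos)

writer.write(Student(id=1, name="ABC"))
```

Note that TRANSIENT_LOCAL ties the data's lifetime to the DataWriter, so data published by App2 itself still disappears when App2 exits; surviving an application restart requires the TRANSIENT or PERSISTENT policy, which in RTI Connext involves the Persistence Service.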

What is the maximum count of webhook subscriptions for the Microsoft Graph API?

I'm looking at the Create Subscription API documentation.
I would like to understand how many subscriptions I can register for an app. Our system has 2000+ users, and we are looking to set up a webhook subscription per user.
This is documented at https://learn.microsoft.com/en-us/graph/webhooks:
Maximum subscription quotas:
Per app: 50,000 total subscriptions
Per tenant: 35 total subscriptions across all apps
Per app and tenant combination: 7 total subscriptions
The limits depend on the type of resources you are subscribing to.
For example, if you are subscribing to /users or /groups, then there are limits documented here.
Note that you would likely need a single subscription per tenant to track changes to all users/groups.
If you are subscribing to /messages, then you can create a subscription for each user mailbox.
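For illustration, creating one subscription per user mailbox is a single POST per user; the token acquisition and the notification endpoint below are placeholders:

```python
import datetime

import requests

def create_mailbox_subscription(access_token: str, user_id: str) -> dict:
    # Outlook resources only allow a short subscription lifetime, so
    # subscriptions must be renewed before they expire.
    expiration = (
        datetime.datetime.now(datetime.timezone.utc)
        + datetime.timedelta(days=2)
    ).strftime("%Y-%m-%dT%H:%M:%S.0000000Z")

    response = requests.post(
        "https://graph.microsoft.com/v1.0/subscriptions",
        headers={"Authorization": f"Bearer {access_token}"},
        json={
            "changeType": "created,updated",
            "notificationUrl": "https://example.com/api/notifications",
            "resource": f"/users/{user_id}/messages",
            "expirationDateTime": expiration,
            "clientState": "secret-validation-string",
        },
    )
    response.raise_for_status()
    return response.json()
```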
