GMB Pub/Sub notifications arrive with a delay or not at all - google-cloud-pubsub

I created a topic.
Attached a push subscription that targets my server.
Added the account mybusiness-api-pubsub@system.gserviceaccount.com as an administrator on the topic.
Updated the notification settings with a valid topic name and the following notificationTypes:
GOOGLE_UPDATE
NEW_REVIEW
UPDATED_REVIEW
NEW_QUESTION
UPDATED_QUESTION
NEW_ANSWER
UPDATED_ANSWER
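For reference, here is a minimal sketch of the Pub/Sub side of that setup using the google-cloud-pubsub Python client; the project ID and push endpoint are placeholders. Note that Google's documentation asks for publish rights (roles/pubsub.publisher) on the topic for this service account, which is what the sketch grants:

from google.cloud import pubsub_v1

PROJECT = "my-project"                      # placeholder project ID
ENDPOINT = "https://example.com/gmb-hook"   # placeholder push target
GMB_SA = "serviceAccount:mybusiness-api-pubsub@system.gserviceaccount.com"

publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()

# 1. Create the topic.
topic_path = publisher.topic_path(PROJECT, "gmb-notifications")
publisher.create_topic(request={"name": topic_path})

# 2. Attach a push subscription that targets the server.
sub_path = subscriber.subscription_path(PROJECT, "gmb-notifications-push")
subscriber.create_subscription(request={
    "name": sub_path,
    "topic": topic_path,
    "push_config": {"push_endpoint": ENDPOINT},
})

# 3. Allow the GMB system account to publish to the topic.
policy = publisher.get_iam_policy(request={"resource": topic_path})
policy.bindings.add(role="roles/pubsub.publisher", members=[GMB_SA])
publisher.set_iam_policy(request={"resource": topic_path, "policy": policy})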
I did a test by manually publishing a message to the topic and received the webhook successfully.
But when I edit a review reply, I receive only some of the events, and with a delay of up to 30 minutes; some notifications never arrive.
If I write a question or an answer, I don't receive any events at all.
Questions:
Why do no events arrive for questions and answers?
Why are the review events delayed?
How can I fix the previous two points?

Concerning NEW_QUESTION, UPDATED_QUESTION, NEW_ANSWER and UPDATED_ANSWER, this is a bug (for more, see Google My Business push notification is not sent for Question and Answers).
Concerning review updates, we are also seeing such delays. My guess is that this is due to Google's publishing/QA pipeline, where changes are first checked against the guidelines before being published globally. You yourself may see your changes right after editing, but that does not mean the whole world does.
There's nothing you can do about either case, unfortunately.

Related

Durable Function "monitor" in portal and Orchestration replay events

I've set logReplayEvents to false, even though that's the default, just to be sure, but I'm still seeing multiple entries for my orchestration function for a single invocation in the Monitor section of the Azure portal.
Any idea how, if it's possible, I can change this so it shows just one entry?
I hope you have set LogReplayEvents to false under the 'durableTask' key (i.e., disabled emitting verbose orchestration replay events) in the host.json file.
And yeah, by default all non-replay tracking events are emitted.
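For reference, a minimal host.json sketch of that setting, matching the layout the question describes (the 'durableTask' key at the top level, as in Durable Functions 1.x):

{
  "durableTask": {
    "logReplayEvents": false
  }
}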
Perhaps I am missing something, but from the screenshot you provided I cannot really tell that those logs are replay events. That's fine if you have checked each event and found that it is a replay event; otherwise, can you go to your App Insights and run the query below to see whether any replay events are present?
traces
| extend isReplay = tobool(tolower(customDimensions["prop__isReplay"]))
| where isReplay == true
Hope this helps at least to some extent! Cheers! :)

Will Gatling actually perform the operation, or will it only check the URLs' response times?

I have a Gatling test for an application that answers a survey; when the survey is answered, the application identifies possible answers that may pose a risk and creates what we call riskareas. These riskareas are normally created in the background as soon as the survey answering is finished. My Gatling test has ten users who answer the survey and log out, and I used the recorder to record the test. After these ten users are finished, I do not see any riskareas being created in the application. Am I missing something? Should the survey really be answered by the Gatling user (like it is in Selenium), or is it just the URLs that the Gatling test will touch?
I am new to Gatling; please help.
Gatling should be indistinguishable from a user in a web browser (or Selenium) as far as the server is concerned, so the end result should be exactly the same as if you'd gone through the process yourself. However, writing a Gatling script is a little more work than writing a Selenium script.
For performance reasons, Gatling operates at a lower level than Selenium. Gatling works with the actual data that is sent to and received from the server (i.e., the actual GETs and POSTs), rather than with user-level interactions (such as clicking links and filling in forms).
The recorder will generally produce a relatively "dumb" script. It records the exact data that was sent to the server and makes no attempt to account for things that may change from run to run. For example, the web application you are testing might have hidden form fields that contain session information, or the link addresses might contain a unique identifier or a session id.
This means that your script may not be doing what you think it's doing.
To debug the script, the first thing to do is to add checks on each of the requests, to validate that you are getting the response you expect (for example, check that when you submit page 1 of the survey, you are taken to page 2 - check for something that you'd only expect to find on page 2, like a specific question).
Once you know which requests are failing, look at what data was sent with the request, and try to figure out where it came from. You will probably find that there are session ids, view state, or similar, that must be extracted from the previous page.
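As an illustration, here is a rough sketch of what such checks and extractions can look like in the Gatling Scala DSL; the paths, form fields, and page text below are placeholders for your application's actual details:

import io.gatling.core.Predef._
import io.gatling.http.Predef._

val answerSurvey = exec(
  http("Submit survey page 1")
    .post("/survey/page1")                    // placeholder path
    .formParam("answer1", "yes")              // placeholder form field
    .check(status.is(200))
    .check(substring("Question 2"))           // text expected only on page 2
    .check(css("input[name='token']", "value").saveAs("token"))  // hidden session field
).exec(
  http("Submit survey page 2")
    .post("/survey/page2")
    .formParam("token", "${token}")           // replay the extracted value
    .formParam("answer2", "no")
    .check(substring("Thank you"))
)

With checks like these in place, the simulation report will show exactly which request stops behaving like a real user session.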
It will help to enable request and response logging, as per the documentation.
To simplify testing of web apps, we wrote some helper functions to allow tests to be written in a more Selenium-like way. Once you understand what your application is doing, you may find that it simplifies scripting for you too. However, understanding why your current script doesn't work the way you expect should be your first step.

Django forum database design

I'm extending an existing Django forum app (DjangoBB) to allow users to opt in/out of notifications for specific forums and/or specific threads. I already have an existing notification infrastructure on my site, and an existing global setting to turn forum notifications on/off.
What I want to do is allow each user to subscribe/unsubscribe from specific forums or threads, effectively overriding their global setting. For example, if my global setting is 'Notifications Off', then I would not get any notifications, unless I set a specific forum to 'Notifications On', in which case I'll get notifications for everything in that forum, unless I set a thread in that forum to 'Notifications Off'. If any thread or topic doesn't have a setting set, it defaults up the chain.
My question is what's the best way to represent all of this in my database in Django? I have two ideas:
class ForumNotificationSetting(models.Model):
    user = models.ForeignKey(User)
    forum = models.ForeignKey(Forum)
    setting = models.NullBooleanField(default=None)
I would also have a similar model for TopicNotificationSetting. This way, whenever a user changes their forum settings, I create (or modify) a NotificationSetting object to reflect their new settings. This method seems simple, but the downside is if lots of people create settings for lots of forums, then creating notifications for everyone would be O(n^2) whenever a new forum post is made. (I think...?)
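To make the override chain concrete, here is a sketch of how a lookup could resolve under this first design; TopicNotificationSetting mirrors the model above, and the global flag on the profile is a hypothetical stand-in for the existing site-wide setting:

def notifications_enabled(user, topic):
    # Thread-level override wins, then the forum-level one, then the global setting.
    topic_setting = TopicNotificationSetting.objects.filter(
        user=user, topic=topic).first()
    if topic_setting is not None and topic_setting.setting is not None:
        return topic_setting.setting
    forum_setting = ForumNotificationSetting.objects.filter(
        user=user, forum=topic.forum).first()
    if forum_setting is not None and forum_setting.setting is not None:
        return forum_setting.setting
    return user.profile.forum_notifications_enabled  # hypothetical global flag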
The second way I can think of would be to create a ManyToMany field in each user's profile for all forums to which they have set settings for. This way seems more complicated (since I'd have to have 4 total M2M fields for 'forums on', 'forums off', 'topics on', and 'topics off'), and I'm not sure if there would be any performance benefits or not.
How would you solve this problem?
Perhaps you could create an M2M relationship on the level of the Thread or Forum object, e.g.:
class Thread(models.Model):
    subscribers = models.ManyToManyField(User)
    # other attributes...
Then, as a user (un)subscribes from a Thread, you add or delete them from the Thread object's subscribers property.
In your application logic, when a user creates a post, thread or forum, you could just check their overall notification setting and - if set to True - add them to the subscribers property of the newly created Thread, Post or Forum.
Each time a thread, post or forum is updated, you could then notify all subscribers.
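A sketch of that flow, reusing the Thread model above; send_notification stands in for whatever your existing notification infrastructure exposes:

def set_subscription(user, thread, subscribe):
    # Called when the user subscribes to or unsubscribes from a thread.
    if subscribe:
        thread.subscribers.add(user)
    else:
        thread.subscribers.remove(user)

def notify_thread_subscribers(thread, message):
    # Fan out a notification to everyone subscribed to the thread.
    for user in thread.subscribers.all():
        send_notification(user, message)  # hypothetical existing helper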
I'd assume you'd need some logic to handle escalation (I can't find the proper word, sorry), i.e. I'd assume a notification for a Thread update should also trigger a notification for a Forum update (or should it?). And then you should of course take care that a user doesn't get spammed with post, thread and forum notifications for a single Post update...

What's a good method of runtime error reporting for my WPF/C# 3.5 client app?

I've thought of writing a service method that I'd call within the catch block of a try/catch and that writes the error details to a table for viewing. Then I realized that if the service went down, the client app would have no way of reporting this data. That led me to the idea of popping up a text box containing the exception details and a Copy button; the user would click the Copy button to copy the text and paste it into an email to our support group.
It may sound crude, but I am new to client app development and haven't really given this much thought until now.
Use the Application.DispatcherUnhandledException event. See this question for a summary (in particular, Drew Noakes' answer).
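A minimal sketch of wiring the handler up in App.xaml.cs; the message box is just a placeholder for whatever reporting UI you settle on:

// App.xaml.cs
using System.Windows;
using System.Windows.Threading;

public partial class App : Application
{
    protected override void OnStartup(StartupEventArgs e)
    {
        base.OnStartup(e);
        DispatcherUnhandledException += OnUnhandledException;
    }

    private void OnUnhandledException(object sender,
        DispatcherUnhandledExceptionEventArgs e)
    {
        // Surface the details so the user can copy them into a support email.
        MessageBox.Show(e.Exception.ToString(), "Unexpected error");
        e.Handled = true; // keep the app alive when resuming is safe
    }
}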
Be aware that there will still be exceptions that preclude successfully resuming your application, such as a stack overflow, exhausted memory, or lost network connectivity while you're trying to save to the database.

BizTalk 2006 - Copy a received file to a new directory

I want to be able to copy the file I have which comes in as XML into a new folder location on the server. Essentially I want to hold a back up of the input files in a new folder.
What I have done so far is try to follow what has been said on this forum post - link text
At first I tried the last method, which didn't do anything (file renaming while reading). So I tried one of the other options and altered the orchestration, putting a Send shape just after the Receive shape, so that the same message that comes in is sent out to the logical port. I exported the MSI and created a Send Port in the Admin console pointing to my copy location. It copies the file, but it keeps creating a new one every second, and the Event Viewer reports warnings saying "The file exists". I have set the Copy Mode of the port to both 'Overwrite' and 'Create New'; neither works.
I have looked on Google but nothing helps. BTW, I support BizTalk but I have no idea how pipelines and ports work. So any help would be appreciated.
Thanks for the quick responses.
As David has suggested I want to be able to track the message off the wire before BizTalk does any processing with it.
I have tried the CodePlex link that Ben supplied; it points to 'Atomic-Scope's BizTalk Message Archiving Pipeline Component', which it looks like my client will have to pay for. I have downloaded the trial and will see if I have any luck.
David - I agree that the orchestration should represent the business flow and making a copy of a file isn't part of the business process. I just assumed when I started tinkering around I could do it myself in the orchestration as suggested on the link I posted.
I'd also rather not rely on the BizTalk tracking within the message box database as I suppose the tracked messages will need to be pruned on a regular basis. Is that correct or am I talking nonsense?
However, is there a way I can do what Atomic-Scope has done that may be cheaper?
Hi again. I have figured it out from David's original post: as indicated, I also created a Send Port which just has a "Filter" expression like BTS.ReceivePortName == ReceivePortName.
Thanks all
As the post you linked to suggests, there are several ways of achieving this sort of result.
The first question is: what do you need to track?
It sounds like there are two possible answers to that question in your case, which I'll address separately.
You need to track the message as received off the wire before BizTalk touches it
This scenario often arises where you need to be able to prove that your BizTalk solution is not the source of any message corruption or degradation being seen in messages.
There are two common approaches to this:
Use a pipeline component such as the one Ben Runchey suggests.
There is another example of an archiving pipeline component on codebetter.com. It looks good - just be careful, if you use other components and depending on where you place this component, that you are still following proper BizTalk streaming-model practices. BizTalk pipelines are all forward-only streaming, meaning that your stream is read only once, and all the work on it happens in an eventing manner.
This is a good approach, but with the following caveats:
You need to be careful about the streaming employed within the pipeline component
You are not actually tracking the on-the-wire message - what your pipeline actually sees is the message after it has gone through the BizTalk adapter (e.g. the HTTP adapter, the File adapter, etc.)
Rely upon BizTalk's out of the box tracking
BizTalk automatically persists all messages to the message box database and if you turn on BizTalk tracking you can make BizTalk keep these messages around.
The main downside here is that enabling this tracking will result in some performance degradation on your server - depending on the exact scenario this may not be a huge hit, but it can be significant.
You can track the message after it has gone through the initial receive pipeline
With this approach there are two main options: use a pure messaging send port subscribing to the receive port, or use an orchestration send port.
I personally do not like the idea of using an orchestration send port. Orchestrations are generally best used to model the business flow needed. Unless this archiving is part of the business flow as understood by standard users, it could simply confuse what does what in your solution.
The approach I tend to use is to create a messaging send port in the BizTalk admin console that subscribes to your receive port. The send port will then just use a standard BizTalk file adapter, with a pass through pipeline.
I think you should look at the BizTalk Message Archiving pipeline component. You can find it on CodePlex (http://www.codeplex.com/btsmsgarchcomp).
You will have to create a new pipeline and deploy it to your BizTalk group. Then update your receive pipeline to archive the file to a location that the host running this receive location has access to.
