The data I'm trying to send out through a WCF service is a Northwind database table (Suppliers) which holds about 29 records, and it already exceeds the maximum message length I can send. I've looked around for answers and everyone says the same thing: increase maxReceivedMessageSize in the .config file.
However, this seems very wrong to me - it feels too much like a workaround rather than a solution to the issue (e.g. what if the data exceeds the maximum value I can set it to?). Instead, is there a way to break the message up into chunks? The service itself is modeled with WSSF, so I'm having a hard time finding where the message is being serialized in the first place (I'm not posting code since WSSF generates a very strict template to work from, as far as I'm aware).
Side note/question: I have a "backup" plan where I can execute a stored procedure against the database that only brings back 10 rows of data at a time (from a specified starting point passed when calling the function). However, I would have to call that function several times. Would this still be better than breaking the message into chunks?
I apologize for not showing any code, but I feel it would only cause more confusion. If it is necessary, I will try to clear this question up to the best of my ability ASAP. Thank you for your contribution!
Provide Skip and Take properties on your request object to allow the client to control paging.
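A minimal sketch of what that might look like, assuming a Supplier data contract already exists (all type and member names here are illustrative, not anything WSSF generates for you):

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Stand-in for the Northwind Suppliers row; trimmed to a couple of columns.
[DataContract]
public class Supplier
{
    [DataMember] public int SupplierID { get; set; }
    [DataMember] public string CompanyName { get; set; }
}

[DataContract]
public class GetSuppliersRequest
{
    [DataMember] public int Skip { get; set; }   // number of rows the client already has
    [DataMember] public int Take { get; set; }   // page size the client wants back
}

[DataContract]
public class GetSuppliersResponse
{
    [DataMember] public List<Supplier> Suppliers { get; set; }
    [DataMember] public int TotalCount { get; set; }   // lets the client know when it has everything
}

[ServiceContract]
public interface ISupplierService
{
    [OperationContract]
    GetSuppliersResponse GetSuppliers(GetSuppliersRequest request);
}

// In the service implementation the paging itself is a one-liner, e.g.:
//   var page = allSuppliers.Skip(request.Skip).Take(request.Take).ToList();
```

The client then loops, advancing Skip by Take until it has received TotalCount rows, so no single response ever needs to approach the message size limit.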
I am interested in replacing my current data format with GTFS, but I hear and read here and there that there are flaws in the GTFS file format.
Most of the time I read that you somehow can't predict things such as delays or other real-time information. They say you can't get the "whole picture" with it.
So what I am asking is: is there anyone more experienced with GTFS (I am seeing it for the first time) who has used it in some kind of application and could tell me about the problems they faced while developing?
Maybe someone has a suggestion about a better kind of file format? Or a combination of some formats?
It's hard to say whether GTFS is a good fit or not for your application without knowing what your application's requirements are, but I can offer a few remarks.
If your goal is to provide real-time data to users you should take a look at GTFS-realtime, a complementary data format designed specifically for issuing real-time updates. For most public-transit applications, using a GTFS and a GTFS-realtime feed together does indeed give the "whole picture" about a transit network, or near enough.
In terms of GTFS itself, my main complaint is that it seems designed specifically for route-planning applications and using data in this format for any other purpose can be difficult. For example, while a GTFS feed records information about transit stops and routes, there is no requirement that each of these have a single, canonical entry—if the data spans multiple board periods, there will almost always be (seemingly) duplicate entries for each.
This doesn't matter if you're plotting a route based on where and when a person is travelling, since the links between objects ensure you'll always generate the right result. If you're starting with only a person's location and want to know, "What transit resources are available nearby?", reliably producing an accurate answer requires some contortions.
It depends on whether you need to import existing feeds. If you do, then you need to be able to handle the format anyway. In my case import was required, so I use the same structure for data that stems from other formats, like PDF timetables; otherwise you would need to support two formats. If you do not need it for import (or export), you may consider your own format: I find GTFS does not reveal the actual network.
GTFS needs quite a bit of interpretation and digesting in order to end up with the whole picture that you can answer planning questions against.
I merge stops together if they are close, say a few meters apart, and assume a 'trivial walk' if they are 10-50 meters apart. That automatically handles combining multiple feeds.
Apart from that, I turn stop_times roughly inside-out to create a 'link' table. The end result is that for each stop you have a list of departures and their destinations.
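As a rough sketch of that inside-out step, assuming stop_times.txt has already been parsed into memory (the StopTime and Link classes below are placeholders, not real GTFS library types):

```csharp
using System.Collections.Generic;
using System.Linq;

// Placeholder for one row of GTFS stop_times.txt.
public class StopTime
{
    public string TripId;
    public string StopId;
    public int StopSequence;
    public string DepartureTime;   // e.g. "26:13:00" for 02:13 on the next service day
}

// One "link": a departure from a stop towards the next stop on the same trip.
public class Link
{
    public string FromStopId;
    public string ToStopId;
    public string DepartureTime;
    public string TripId;
}

public static class LinkBuilder
{
    // Turn stop_times inside-out: for every consecutive pair of stops on a trip,
    // emit a link, then group the links by their origin stop.
    public static Dictionary<string, List<Link>> Build(IEnumerable<StopTime> stopTimes)
    {
        var links = new List<Link>();
        foreach (var trip in stopTimes.GroupBy(st => st.TripId))
        {
            var ordered = trip.OrderBy(st => st.StopSequence).ToList();
            for (int i = 0; i < ordered.Count - 1; i++)
            {
                links.Add(new Link
                {
                    FromStopId = ordered[i].StopId,
                    ToStopId = ordered[i + 1].StopId,
                    DepartureTime = ordered[i].DepartureTime,
                    TripId = ordered[i].TripId
                });
            }
        }
        return links.GroupBy(l => l.FromStopId)
                    .ToDictionary(g => g.Key, g => g.ToList());
    }
}
```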
The biggest problem so far is that GTFS feeds can record trips from the operator's point of view. Passengers can remain seated in the bus while it flips the headsign from 351 to 285, takes a new driver on board, and continues. That means you need to know which trips actually have to be treated as joined in passenger terms.
I solved a minor problem with manual feed entry by having my GTFS parser accept a handful of constructs that ease editing, such as leaving out the sequence numbers so they are generated incrementally, and recognising 02.13+1 as 26.13.
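For what it's worth, here is a guess at how such a shorthand might be normalised to the usual over-24-hour notation (this is just a sketch of the idea, not my actual parser):

```csharp
public static class TimeShorthand
{
    // Convert an editing shorthand like "02.13+1" into the over-24-hour form "26.13".
    // Assumes the input is HH.MM with an optional trailing "+1" meaning "next service day".
    public static string Normalise(string raw)
    {
        bool nextDay = raw.EndsWith("+1");
        string core = nextDay ? raw.Substring(0, raw.Length - 2) : raw;
        string[] parts = core.Split('.');
        int hours = int.Parse(parts[0]) + (nextDay ? 24 : 0);
        return hours.ToString("00") + "." + parts[1];
    }
}
```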
I'm looking for a way to manually adjust TFS task start dates so that my burndown appears correct.
Essentially the iteration has fixed start/end dates, and some user stories did not get filled out until halfway through the iteration.
This made the burndown have a bump in the road so it looks like we are below target.
I have full access to the TFS database and am wondering what queries I would need to write to get my tasks backdated to the start of the iteration.
I have read somewhere that it is System.AuthorizedDate that controls the burndown chart.
Any help appreciated.
J
You are correct on System.AuthorizedDate being used.
You won't be able to change System.AuthorizedDate through the public API; it won't let you. And you cannot change System.AuthorizedDate with SQL update commands and remain in a supported state: officially, Microsoft does not allow this and will only continue to support you if the SQL changes were made under their guidance, such as through a support incident.
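You can see this for yourself from the client object model; a quick check along these lines (substitute your own collection URL and a real work item id) should report the field as read-only:

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class Program
{
    static void Main()
    {
        // Placeholder URL and work item id - replace with your own values.
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://yourserver:8080/tfs/DefaultCollection"));
        var store = collection.GetService<WorkItemStore>();

        WorkItem workItem = store.GetWorkItem(12345);
        Field authorizedDate = workItem.Fields["System.AuthorizedDate"];

        Console.WriteLine("Value:      " + authorizedDate.Value);
        Console.WriteLine("IsEditable: " + authorizedDate.IsEditable);  // expected False for system-managed fields
    }
}
```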
I doubt a support incident with Microsoft will yield the update query, as this isn't a defect and, as I explain below, it could put you in a very bad place. Could you create a series of updates on the appropriate tables to backdate System.AuthorizedDate? Without a doubt. It might even work, but I am not certain it would if you dared to try. The reason is that work items receive System.Id numbers sequentially as they are created. I do know that in version control the system expects a higher changeset number to have a later commit date (I can't recall the exact field name) than any lower changeset number, and it would not surprise me if there are similar expectations for work items. You might find that such a change to the field via SQL produces errors or unexpected outcomes in various places - I can imagine a future upgrade or even an update simply bombing and failing to complete. That's all hypothetical, though, because unless you want your environment in an unsupported state you would not change it via SQL.
Short of creating your own burndown that evaluates the data differently, I am not aware of a way to meet your goal under those conditions.
I'm working on a basic syncing algorithm for a user's notes. I've got most of it figured out, but before I start programming it, I want to run it by people here to see if it makes sense. Usually I end up not noticing one huge, important thing that someone else spots easily. Here's how it works:
I have a table in my database where I insert objects called SyncOperation. A SyncOperation is a sort of metadata on the nature of what every device needs to perform to be up to date. Say a user has 2 registered devices, firstDevice and secondDevice. firstDevice creates a new note and pushes it to the server. Now, a SyncOperation is created with the note's Id, operation type, and processedDeviceList. I create a SyncOperation with type "NewNote", and I add the originating device ID to that SyncOperation's processedDeviceList. So now secondDevice checks in to the server to see if it needs to make any updates. It makes a query to get all SyncOperations where secondDeviceId is not in the processedDeviceList. It finds out its type is NewNote, so it gets the new note and adds itself to the processedDeviceList. Now this device is in sync.
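To make that query concrete, here is roughly what I have in mind, assuming the processedDeviceList is modelled as a join table (all table and column names are just how I'm imagining it so far):

```csharp
// Hypothetical schema: SyncOperations(Id, NoteId, Type) plus a join table
// SyncOperationDevices(SyncOperationId, DeviceId) standing in for the processedDeviceList.
public static class SyncQueries
{
    // Everything the calling device has not yet processed.
    public const string PendingForDevice = @"
        SELECT so.Id, so.NoteId, so.Type
        FROM   SyncOperations so
        WHERE  NOT EXISTS (SELECT 1
                           FROM   SyncOperationDevices d
                           WHERE  d.SyncOperationId = so.Id
                             AND  d.DeviceId = @deviceId);";
}
```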
When I delete a note, I find the already-created SyncOperation in the table with type "NewNote". I change the type to "Delete" and remove all devices from the processedDeviceList except the device that deleted the note. So now when the other devices call in to see what they need to update, since their deviceId is not in the processedDeviceList, they'll have to process that SyncOperation, which tells their device to delete the respective note.
And that's generally how it'd work. Is my solution too complicated? Can it be simplified? Can anyone think of a situation where this wouldn't work? Will this be inefficient on a large scale?
Sounds very complicated - the central database shouldn't be responsible for determining which devices have received which updates. Here's how I'd do it:
The database keeps a table of SyncOperations, one row per change. Each SyncOperation has a change_id numbered in ascending order (that is, change_id INTEGER PRIMARY KEY AUTOINCREMENT).
Each device keeps a current_change_id number representing what change it last saw.
When a device wants to update, it does SELECT * FROM SyncOperations WHERE change_id > current_change_id. This gets it the list of all changes it needs to be up-to-date. Apply each of them in chronological order.
This has the charming feature that, if you wanted to, you could initialise a new device simply by creating a new client with current_change_id = 0. Then it would pull in all updates.
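A minimal sketch of the client-side pull, using generic ADO.NET interfaces so it isn't tied to a particular database (the class and column names are illustrative):

```csharp
using System.Collections.Generic;
using System.Data;

// One pending change pulled from the server's SyncOperations table.
public class SyncOperation
{
    public long ChangeId;
    public long NoteId;
    public string Type;      // e.g. "NewNote", "Edit", "Delete"
}

public static class SyncPuller
{
    // Fetch all changes newer than what this device has already seen,
    // in ascending change_id order so they can be applied chronologically.
    public static List<SyncOperation> PullSince(IDbConnection conn, long currentChangeId)
    {
        var changes = new List<SyncOperation>();
        using (IDbCommand cmd = conn.CreateCommand())
        {
            cmd.CommandText =
                "SELECT change_id, note_id, type " +
                "FROM SyncOperations " +
                "WHERE change_id > @currentChangeId " +
                "ORDER BY change_id";
            IDbDataParameter p = cmd.CreateParameter();
            p.ParameterName = "@currentChangeId";
            p.Value = currentChangeId;
            cmd.Parameters.Add(p);

            using (IDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    changes.Add(new SyncOperation
                    {
                        ChangeId = reader.GetInt64(0),
                        NoteId = reader.GetInt64(1),
                        Type = reader.GetString(2)
                    });
                }
            }
        }
        return changes;
    }
}
```

After applying the returned list in order, the device stores the highest ChangeId it saw as its new current_change_id; a brand-new device simply starts from 0 and pulls everything.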
Note that this won't really work if two users can be doing concurrent edits (which edit "wins"?). You can try and merge edits automatically, or you can raise a notification to the user. If you want some inspiration, look at the operation of the git version control system (or Mercurial, or CVS...) for conflicting edits.
You may want to take a look at SyncML for ideas on how to handle sync operations (http://www.openmobilealliance.org/tech/affiliates/syncml/syncml_sync_protocol_v11_20020215.pdf). SyncML has been around for a while, and as a public standard, has had a fair amount of scrutiny and review. There are also open source implementations (Funambol comes to mind) that can also provide some coding clues. You don't have to use the whole spec, but reading it may give you a few "ahah" moments about syncing data - I know it helped to think through what needs to be done.
Mark
P.S. A later version of the protocol - http://www.openmobilealliance.org/technical/release_program/docs/DS/V1_2_1-20070810-A/OMA-TS-DS_Protocol-V1_2_1-20070810-A.pdf
I have seen the basic idea of keeping track of operations in a database elsewhere, so I dare say it can be made to work. You may wish to think about what should happen if different devices are in use at much the same time, and end up submitting conflicting changes - e.g. two different attempts to edit the same note. This may surface as a change to the user interface, to allow them to intervene to resolve such conflicts manually.
How do you go about collecting and storing data that was not part of the initial database and software design? For example, if you've come up with a points system, you have to collect the points for every user who has already registered. For new users that would be easy, because the changes to the business logic will reflect the points system... but what about the old ones?
In general, how does one deal with data which should have been there from the beginning, but wasn't? Writing manual queries to collect the missing pieces? Using crons?
Well, you are asking for something that is by definition not possible, I think.
deal with data which should have been there from the beginning, but wasn't?
Consider: if you are able to deduce the number of points from the existing data in the database, then there is obviously no missing data. Storing the points separately would merely make them redundant (still a fine option in case you need that for performance).
For example: Stack Overflow rewards consecutive visits. Let's say they did not do that from the start. If they were already logging the date of each visit, you can recalculate the points, so there is no missing data.
If that is not possible, you need another solution: either get the data from other sources (parse a web server log, for instance) or get the business to draft some extra business rules for determining default values for the existing users (difficult in this particular example).
Writing manual queries to collect the missing pieces? Using crons?
I would populate that in a conversion script or even in a special conversion application if very complex.
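As a sketch of what such a conversion script could look like for the visits example, assuming the visits were already being logged somewhere (the table names and the points rule are made up):

```csharp
using System;
using System.Data.SqlClient;

class BackfillPoints
{
    // One-off conversion: derive the missing points for existing users from data
    // that is already in the database (here: a hypothetical VisitLog table).
    static void Main()
    {
        const string sql = @"
            UPDATE u
            SET    u.Points = ISNULL(v.VisitCount, 0) * 2   -- hypothetical rule: 2 points per visit
            FROM   Users u
            LEFT JOIN (SELECT UserId, COUNT(*) AS VisitCount
                       FROM   VisitLog
                       GROUP BY UserId) v ON v.UserId = u.Id;";

        using (var conn = new SqlConnection("Server=.;Database=MyApp;Integrated Security=true"))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            int rows = cmd.ExecuteNonQuery();
            Console.WriteLine(rows + " users backfilled.");
        }
    }
}
```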
I have several horrors of old ASP web applications. Does anyone have any easy ways to find what scripts, pages, and stored procedures are no longer needed? (besides the stuff in "old___code", "delete_this", etc ;-)
Chances are, if a stored proc won't run, it isn't being used, because nobody ever bothered to update it when something else changed. Table columns that are null for every single record are probably not being used either.
If you have your stored procs and database objects in source control (and if you don't, why don't you?), you might be able to reach back and find what other code each one was moved to production with, which should give you a clue as to what might call it. You will also be able to see who touched it last, and that person might know if it is still needed.
I generally approach this by first listing all the procs (you can get this from the system tables) and then marking off the ones I know are being used. Profiler can help here, as you can see which ones are commonly being called (but don't assume that because Profiler didn't show a proc it isn't being used; that just gives you a list of the ones to research). This makes the list of procs that need to be researched much smaller. Depending on your naming convention, it might be relatively easy to see what part of the code should use them. When researching, don't forget that procs are called in places other than the application, so you will need to check jobs, DTS or SSIS packages, SSRS reports, other applications, triggers, etc., to be sure something is not being used.
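As a starting point, something like this will dump the full list of procs along with whatever execution stats the server currently holds; note that sys.dm_exec_procedure_stats exists only in SQL Server 2008 and later and only covers activity since the last restart, so a missing row means "research it", not "unused":

```csharp
using System;
using System.Data.SqlClient;

class ListProcUsage
{
    static void Main()
    {
        // Lists every stored proc with the execution stats the DMV currently holds.
        // Connection string and database name below are placeholders.
        const string sql = @"
            SELECT SCHEMA_NAME(p.schema_id) AS SchemaName,
                   p.name                   AS ProcName,
                   s.execution_count,
                   s.last_execution_time
            FROM   sys.procedures p
            LEFT JOIN sys.dm_exec_procedure_stats s
                   ON s.object_id = p.object_id AND s.database_id = DB_ID()
            ORDER BY s.execution_count DESC;";

        using (var conn = new SqlConnection("Server=.;Database=LegacyAppDb;Integrated Security=true"))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}.{1}\t{2}\t{3}",
                        reader["SchemaName"],
                        reader["ProcName"],
                        reader["execution_count"] is DBNull ? "no stats" : reader["execution_count"],
                        reader["last_execution_time"] is DBNull ? "" : reader["last_execution_time"]);
                }
            }
        }
    }
}
```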
Once you have identified a list of ones you don't think you need, share it with the rest of the development staff and ask if anyone knows whether any of them are still needed. You'll probably get a couple more taken off the list this way that are used for something specialized. Then, when you have the final list, rename the procs to some convention that identifies them as candidates for deletion. At the same time, set a deletion date (how far out that date is depends on how often something might be called; if it is called something like AnnualXYZReport, make that date a year out). If no one complains by the deletion date, delete the proc (of course, if it is in source control you can always get it back even then).
Once you have gone through the hell of identifying the bad ones, it is time to realize you need to train people that part of the development process is to identify procs that are no longer being used and get rid of them as part of any change to a section of code. Depending on code reuse, this may mean searching the code base to see if some other part of it uses the proc, and then doing the same thing as above: let everyone know it will be deleted on a given date, change the name so that any code referencing it will break, and then delete it on that date. Or maybe you can have a metadata table where you record candidates for deletion as soon as you know you have stopped using something, and send a report around once a month or so to determine whether anyone else still needs them.
I can't think of any easy way to do this, it's just a matter of identifying what might not be used and slogging through.
For SQL Server only, 3 options that I can think of:
modify the stored procs to log usage
check if code has no permissions set
run profiler
And of course, remove access or delete it and see who calls...