JD Edwards - Forecasting remaining cycle time

I work for a manufacturing company that is using JD Edwards 8.12.
My task is now to estimate the number of work orders that will be delivered on time.
Each work order has a specific requested date but due to issues and challenges, the original requested date is no longer realistic.
Is there any technique or approach to recalculate the requested date based on the current work center?

Related

How to model a user's calendar events with Firebase

I have users, and each user gets assigned 12 events (they can reschedule these events, etc.) every 2 months. Each event is an object with id, name, description, date, is completed.
I'm currently saving these events in the user's document so that I only do one document read (events: [{event} * 12]). After a year there will be 72 events in this array, and it will keep growing year after year.
I'm wondering, should I be concerned with the 1mb limit?
I'd like to preserve history, so that the user can also view events of the past.
Given that on the calendar you could see at most one month's worth of events, and say I lazy-loaded the previous month for speed, using a subcollection for events would result in 12-24 document reads. I fear this would get expensive very quickly.
Any advice would be appreciated, thanks.
Honestly, I wouldn't be too concerned with the 1MB limit; that is still a lot of characters (roughly 1 million, although it may be a bit less depending on data types), so unless the descriptions could be incredibly long I think it's unlikely you will get anywhere near that limit.
That being said, if it is a concern, you could schedule a Cloud Function to run periodically (perhaps every 3 months) and archive or move events that are no longer in use to a subcollection, storing them across more documents (one per quarter, year, or whatever time period you decide on).
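As a rough sketch of such a job (collection and field names below are assumptions, and event dates are assumed to be stored as Firestore timestamps, not part of the original question):

```python
# Rough sketch of a periodic archive job (run from Cloud Scheduler, cron, etc.).
# Assumptions: users/{uid} holds an "events" array, event dates are Firestore
# timestamps, and archived events move to users/{uid}/eventsArchive/{period}.
from datetime import datetime, timedelta, timezone

import firebase_admin
from firebase_admin import firestore

firebase_admin.initialize_app()
db = firestore.client()

CUTOFF = datetime.now(timezone.utc) - timedelta(days=90)  # keep roughly one quarter "hot"

def archive_old_events() -> None:
    for user_doc in db.collection("users").stream():
        events = (user_doc.to_dict() or {}).get("events", [])
        old = [e for e in events if e.get("date") and e["date"] < CUTOFF]
        if not old:
            continue
        recent = [e for e in events if e not in old]
        # Group archived events into one document per quarter, so displaying a
        # past month later costs only one or two extra reads.
        by_period: dict[str, list] = {}
        for e in old:
            period = f"{e['date'].year}-Q{(e['date'].month - 1) // 3 + 1}"
            by_period.setdefault(period, []).append(e)
        batch = db.batch()
        for period, evts in by_period.items():
            archive_ref = user_doc.reference.collection("eventsArchive").document(period)
            batch.set(archive_ref, {"events": firestore.ArrayUnion(evts)}, merge=True)
        batch.update(user_doc.reference, {"events": recent})
        batch.commit()
```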

Do MWS API report time limits apply to data being added or data within the report?

I'm looking specifically at the FBA Customer Shipment Sales report, but I believe the question applies more generally to most reports.
One of the columns in the report is the “Shipment Date”. When I request this report via the MWS API, I can specify a StartDate and an EndDate. Do these dates filter on the “Shipment Date” column, or do they instead filter based on the date that the data was added to the report?
For example, if an order ships at 2019-07-29T12:00:00Z, but Amazon doesn’t actually add it to the report until an hour later at 2019-07-29T13:00:00Z, then if I generate this report with an EndDate of 2019-07-29T12:00:00Z, will this shipment appear in the report? Or will it only appear if the EndDate is greater than or equal to 2019-07-29T13:00:00Z since that’s the time the shipment was actually added to the report?
I understand that in general this report is near real-time, so it may not matter 99% of the time, but I'm concerned about the rare times where the data may be delayed coming into the report. I want to make sure I will still be able to see the new data based on my date filters.
I think I found my answer here: https://sellercentral.amazon.com/gp/help/200453120?language=en_US&ref=ag_200453120_cont_201074420
It says:
The report contains all completed shipments reported to FBA during the specified time period. This may not include all items that were shipped during that time frame if they have not yet been reported to our system. Those items will be reported in a future time period. This ensures that the report data will always be consistent for any given date range.
And:
Shipment dates are based on when the shipment was reported to the system, which is generally a few hours after the actual ship date. Other reports may calculate shipment dates differently.
So the answer is actually that the "Shipment Date" is the date the shipment was reported and added to the report, which is not necessarily the same as the date and time the shipment actually took place.
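Given that, a simple way to avoid missing late-reported shipments is to request contiguous windows keyed off your previous EndDate rather than off the shipment timestamps. A minimal sketch of just that bookkeeping (the actual MWS report request is omitted, and persisting the last EndDate between runs is assumed):

```python
# Sketch of the request-window bookkeeping only; the MWS ReportRequest call is
# omitted. Because "Shipment Date" reflects when the shipment was reported to
# FBA, contiguous windows keyed off the previous EndDate won't miss data -
# late-reported shipments simply fall into a later window.
from datetime import datetime, timezone

def next_report_window(previous_end_date: datetime) -> tuple[datetime, datetime]:
    """previous_end_date is the EndDate of the last successful request
    (storing it between runs is assumed, not shown)."""
    start = previous_end_date           # pick up exactly where the last window ended
    end = datetime.now(timezone.utc)    # the report data is near real-time
    return start, end

# Example from the question: a shipment that happened at 12:00Z but was reported
# at 13:00Z appears in whichever window contains 13:00Z, not the one ending at 12:00Z.
start, end = next_report_window(datetime(2019, 7, 29, 12, 0, tzinfo=timezone.utc))
```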

Database Design: How do I handle tracking goals vs. actuals over time?

This isn't exactly a programming question, as I don't have an issue writing the code, but a database design question. I need to create an app that tracks sales goals vs. actual sales over time. The thing is that a person's goal can change (let's say daily at most).
Also, a location can have multiple agents with different goals that need to be added together for the location.
I've considered basically running a timed task to save daily goals per agent into a field. It seems that over the years that will be a lot of data, but it would let me simply query a date range and add up all the daily goals to get a goal for that date range.
Otherwise, I guess I could simply write changes (e.g. March 2nd: 15 sales/week; April 12th: 16 sales/week), which would be less data, but much more programming work to figure out goals based on a time query.
I'm assuming there is probably a best practice for this - anyone?
Put a date range on your goals. The start of the range is when you set that goal. The end of the range starts off as the maximum collating date (often 9999-12-31, depending on your database).
Treat this as "until forever" or "until further notice".
When you want to know what goals were in effect as of a particular date, you would have something like this in your WHERE clause:
...
WHERE effective_date <= #AsOfDate
AND expiry_date > #AsOfDate
...
When you change a goal, you need two operations: first, update the existing record (if it exists) and set its expiry_date to the new as-of date; then insert a new record with an effective_date of the new as-of date and an expiry_date of forever (e.g. '9999-12-31').
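For illustration, the two-step change might look like this (the table and column names are made up, and sqlite3 is used only to keep the sketch self-contained and runnable):

```python
# Sketch of the two-step goal change. Table/column names (agent_goals, agent_id,
# weekly_goal, effective_date, expiry_date) are illustrative, not from the post.
import sqlite3

FOREVER = "9999-12-31"

def change_goal(conn: sqlite3.Connection, agent_id: int, new_goal: int, as_of: str) -> None:
    cur = conn.cursor()
    # 1) Expire the currently-open goal row, if one exists.
    cur.execute(
        """UPDATE agent_goals
              SET expiry_date = ?
            WHERE agent_id = ? AND expiry_date = ?""",
        (as_of, agent_id, FOREVER),
    )
    # 2) Insert the new goal, open-ended "until further notice".
    cur.execute(
        """INSERT INTO agent_goals (agent_id, weekly_goal, effective_date, expiry_date)
           VALUES (?, ?, ?, ?)""",
        (agent_id, new_goal, as_of, FOREVER),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE agent_goals (
    agent_id INTEGER, weekly_goal INTEGER,
    effective_date TEXT, expiry_date TEXT)""")
change_goal(conn, agent_id=1, new_goal=15, as_of="2024-03-02")
change_goal(conn, agent_id=1, new_goal=16, as_of="2024-04-12")
# Goals in effect on a given date (sum per location if you join an agents table):
#   WHERE effective_date <= :as_of AND expiry_date > :as_of
```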
This gives you the following benefits:
Minimum number of rows
No scheduled processes to take daily snapshots
Easy retrieval of effective records as of a point in time
Ready-made audit log of changes

DynamoDB: How to distribute workload over the month?

TL;DR
I have a table with about 2 million WRITEs over the month and 0 READs. Every 1st day of a month, I need to read all the rows written on the previous month and generate CSVs + statistics.
How to work with DynamoDB in this scenario? How to choose the READ throughput capacity?
Long description
I have an application that logs client requests. It has about 200 clients. On the 1st day of every month, each client needs to receive a CSV with all the requests they've made. They also need to be billed, and for that we need to calculate some stats on the requests they've made, grouped by type of request.
So at the end of the month, a client receives a CSV of all their requests plus statistics grouped by request type.
I've already come up with two solutions, but I'm still not convinced by either of them.
1st solution: ok, every last day of the month I increase the READ throughput capacity and then I run a map reduce job. When the job is done, I decrease the capacity back to the original value.
Cons: not fully automated, risk of the DynamoDB capacity not being available when the job starts.
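(For concreteness, the capacity bump and restore in this first solution could at least be scripted; the table name and throughput numbers below are placeholders, not from the original setup.)

```python
# Rough sketch of scripting the capacity bump/restore around the monthly export
# (table name and throughput numbers are placeholders).
import boto3

dynamodb = boto3.client("dynamodb")
TABLE_NAME = "client_request_log"

def set_throughput(read_units: int, write_units: int) -> None:
    dynamodb.update_table(
        TableName=TABLE_NAME,
        ProvisionedThroughput={
            "ReadCapacityUnits": read_units,
            "WriteCapacityUnits": write_units,
        },
    )

set_throughput(500, 10)   # raise reads just before the export / map-reduce job
# ... run the export job ...
set_throughput(5, 10)     # restore the normal, write-heavy profile afterwards
```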
2nd solution: I can break the generation of CSVs + statistics into small jobs on a daily or hourly routine. I could store partial CSVs on S3, and on the 1st day of every month I could join those files and generate a new one. The statistics would be much easier to generate, just some calculations derived from the daily/hourly statistics.
Cons: I feel like I'm turning something simple into something complex.
Do you have a better solution? If not, what solution would you choose? Why?
Having been in a similar place myself, the approach I used, and now recommend to you, is to process the raw data:
as often as you reasonably can (start with daily)
to a format as close as possible to the desired report output
with as much calculation/CPU intensive work done as possible
leaving as little to do at report time as possible.
This approach is entirely scalable - the incremental frequency can be:
reduced to as small a window as needed
parallelised if required
It also makes it possible to re-run past months' reports on demand, as the report generation time should be quite small.
In my example, I shipped denormalized, pre-processed (financial calculations) data every hour to a data warehouse, then reporting just involved a very basic (and fast) SQL query.
This had the additional benefit of spreading the load on the production database server into lots of small bites, instead of bringing it to its knees once a week at invoice time (30,000 invoices produced every week).
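A rough sketch of what the daily increment could look like here, assuming each logged request carries a client_id partition key and an ISO-8601 ts sort key, and that the partial CSVs are parked in S3 as in the question's second solution (all names below are illustrative):

```python
# Sketch of the daily pre-processing step: pull yesterday's requests per client
# from DynamoDB and park them as a partial CSV in S3. Assumes the table is keyed
# by client_id (partition) and ts (sort, ISO-8601 string), and that items share
# the same attributes; all names are illustrative.
import csv
import io
from datetime import date, timedelta

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")
table = dynamodb.Table("client_request_log")
BUCKET = "billing-partials"                   # placeholder bucket
CLIENT_IDS = ["client-001", "client-002"]     # ~200 of these in practice

def export_day(day: date) -> None:
    start, end = day.isoformat(), (day + timedelta(days=1)).isoformat()
    for client_id in CLIENT_IDS:
        # Pagination via LastEvaluatedKey is omitted for brevity.
        resp = table.query(
            KeyConditionExpression=Key("client_id").eq(client_id)
            & Key("ts").between(start, end)
        )
        items = resp["Items"]
        if not items:
            continue
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=sorted(items[0].keys()))
        writer.writeheader()
        writer.writerows(items)
        # One small object per client per day; the month-end job only has to
        # concatenate these and sum the per-type counts.
        s3.put_object(
            Bucket=BUCKET,
            Key=f"partials/{client_id}/{day.isoformat()}.csv",
            Body=buf.getvalue().encode("utf-8"),
        )

export_day(date.today() - timedelta(days=1))
```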
I would use the Kinesis service to produce daily and almost real-time billing.
For this purpose I would create a special DynamoDB table just for the calculated data (another option is to run it on flat files).
Then I would add a process that sends events to the Kinesis service just after you update the regular DynamoDB table.
Thus, when you reach the end of the month, you can just execute whatever post-billing calculations you have and create your CSV files from the already-calculated table.
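A minimal sketch of that "publish to Kinesis right after the write" step might look like this (the stream name and payload fields are made up):

```python
# Sketch of emitting a billing event to Kinesis right after the DynamoDB write.
# Stream name and payload fields are placeholders.
import json

import boto3

kinesis = boto3.client("kinesis")

def publish_billing_event(client_id: str, request_type: str, ts: str) -> None:
    kinesis.put_record(
        StreamName="billing-events",            # placeholder stream
        PartitionKey=client_id,                 # keeps one client's events ordered
        Data=json.dumps(
            {"client_id": client_id, "request_type": request_type, "ts": ts}
        ).encode("utf-8"),
    )
```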
I hope that helps.
Take a look at Dynamic DynamoDB. It will increase/decrease the throughput when you need it without any manual intervention. The good news is you will not need to change the way the export job is done.

Large number of entries: how to quickly calculate the total?

I am writing a rather large application that allows people to send text messages and emails. I will charge 7c per SMS and 2c per email sent. I will allow people to "recharge" their account. So, the end result is likely to be a database table with a few small entries like +100 and many, many entries like -0.02 and -0.07.
I need to check a person's balance immediately when they are trying to send an email or a message.
The obvious answer is to have a cached "total" somewhere, and update it whenever something is added or taken out. However, as always in programming, there is more to it: what about monthly statements, where the balance needs to be carried forward from the previous month? My "intuitive" solution is to have two levels of cache: one for the current month, and one entry per month (or billing period) holding three values:
The total added
The total taken out
The balance to that point
Are there better, established ways to deal with this problem?
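(For concreteness, the per-period rollup described above might look something like this; the names and types are illustrative only.)

```python
# Sketch of the per-billing-period rollup (the second cache level). Names are
# illustrative; amounts follow the question (+100 top-up, -0.07 SMS, -0.02 email).
from dataclasses import dataclass
from decimal import Decimal

@dataclass
class MonthlyStatement:
    total_added: Decimal      # sum of recharges in the period
    total_taken_out: Decimal  # sum of SMS/email charges in the period
    closing_balance: Decimal  # balance carried forward to the next period

def close_period(opening_balance: Decimal, entries: list[Decimal]) -> MonthlyStatement:
    added = sum((e for e in entries if e > 0), Decimal("0"))
    taken_out = sum((e for e in entries if e < 0), Decimal("0"))
    return MonthlyStatement(added, taken_out, opening_balance + added + taken_out)

# Example: one recharge and a few messages in one month.
stmt = close_period(Decimal("0.00"),
                    [Decimal("100"), Decimal("-0.07"), Decimal("-0.07"), Decimal("-0.02")])
print(stmt.closing_balance)  # Decimal('99.84')
```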
This largely depends on the RDBMS.
If it were SQL Server, one solution would be to create an indexed view (or views) to automatically and incrementally calculate and hold the aggregated values.
Another solution is to use triggers to aggregate whenever a row is inserted at the finest granularity of detail.
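For illustration, the SQL Server indexed-view variant might look roughly like this (the connection string, table and column names are made up; pyodbc is used only to keep the sketch self-contained):

```python
# Sketch of the SQL Server "indexed view" approach, run through pyodbc.
# Connection string, table and column names are illustrative only.
import pyodbc

conn = pyodbc.connect("DSN=billing;Trusted_Connection=yes")  # placeholder DSN
cur = conn.cursor()

# The view must be schema-bound and, because it uses GROUP BY, must include
# COUNT_BIG(*). `amount` is assumed NOT NULL (+100 top-ups, -0.07 / -0.02 debits).
cur.execute("""
CREATE VIEW dbo.account_balance
WITH SCHEMABINDING
AS
SELECT account_id,
       SUM(amount)   AS balance,
       COUNT_BIG(*)  AS entry_count
FROM dbo.ledger_entries
GROUP BY account_id;
""")

# The unique clustered index is what makes SQL Server maintain the totals
# incrementally on every insert, so the balance check is a single-row read.
cur.execute("""
CREATE UNIQUE CLUSTERED INDEX ix_account_balance
ON dbo.account_balance (account_id);
""")
conn.commit()
```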
