I am trying to figure out the best way to build a website that functions as a weekly programmable timer; after many hours of searching I keep coming up empty-handed.
I tried to start development on a web page. This is where I wound up:
http://www.gstrip.tk
I have a complete lack of knowledge on how to progress from this point.
How do I store the timers?
How do I make the server search for timers?
Then how do I execute a function, such as a simple HTTP request, when the stored time for that day matches the current time?
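To make the question concrete, here is roughly the shape of what I imagine - a minimal sketch in Python with SQLite and the requests library, where the schema, column names, and target URL are all made-up placeholders (I have no idea if this is the right approach):

```python
# Rough sketch: a timers table plus a once-a-minute check.
# Assumes Python 3 with the third-party "requests" package installed;
# the schema and target URLs are made-up placeholders.
import sqlite3
import time
import datetime
import requests

conn = sqlite3.connect("timers.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS timers (
        id INTEGER PRIMARY KEY,
        day_of_week INTEGER,   -- 0 = Monday ... 6 = Sunday
        fire_at TEXT,          -- "HH:MM", 24-hour clock
        url TEXT               -- request to send when the timer fires
    )
""")
conn.commit()

while True:
    now = datetime.datetime.now()
    rows = conn.execute(
        "SELECT url FROM timers WHERE day_of_week = ? AND fire_at = ?",
        (now.weekday(), now.strftime("%H:%M")),
    ).fetchall()
    for (url,) in rows:
        requests.get(url)          # the "function" to execute: a simple HTTP GET
    time.sleep(60 - now.second)    # wake up again at the top of the next minute
```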
This has been a huge stumbling block for me. I assumed there would be an open-source, web-based weekly programmable timer somewhere that I could modify, but it appears to be nearly impossible to find.
If one is already posted, could you provide a link - anything, really. I have searched fairly regularly over the last couple of weeks, using different terms on both Google and Stack Overflow, with no results. Maybe the syntax of my searches is somehow wrong and that is the reason for the lack of results, I don't know, but I could sure use someone who is understanding and willing to level with me on this issue.
So I am trying to develop an Alexa skill that, in short, lets users ask for step-by-step instructions on how to build various items. However, I am running into an issue: for any given step, an intent will only listen for a response for a total of 16 seconds if a re-prompt is added.
This really hinders the skill, because most things you build have steps that take far longer than 16 seconds.
I am also developing this skill solely for devices that support APL, so I'm not sure if that information is of any help.
I've tried a hacky workaround of using SSML to play a silent audio file, but it can only play files that are at most 90 seconds long, which still isn't enough time. So if anyone can lead me in the right direction to solving this issue, that would be much appreciated.
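For reference, this is roughly what my silent-audio workaround looks like, sketched with the Python ASK SDK - the intent name, the spoken text, and the MP3 URL are placeholders:

```python
# Sketch of the silent-audio re-prompt workaround (Python ASK SDK).
# The MP3 URL is a placeholder for a hosted ~90-second silent file;
# Alexa caps <audio> clips at 90 seconds, which is the limit I'm hitting.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

SILENCE = '<audio src="https://example.com/audio/silence_90s.mp3"/>'

class NextStepHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("NextStepIntent")(handler_input)

    def handle(self, handler_input):
        return (handler_input.response_builder
                .speak("Step 3: glue the panels and let them set.")
                # The re-prompt "plays" silence to hold the session open
                # longer, but 90 seconds is still the ceiling.
                .ask(SILENCE)
                .response)

sb = SkillBuilder()
sb.add_request_handler(NextStepHandler())
handler = sb.lambda_handler()
```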
So I was recently hired by a big department of a Fortune 50 company, straight out of college. I'll be supporting a brand new ASP.NET MVC app - over a million lines of code written by contractors over 4 years. The system works great with up to 3 or 4 simultaneous requests, but becomes very slow with more. It's supposed to go live in 2 weeks ... I'm looking for practical advice on how to drastically improve the scalability.
The advice I was given at university is to always run a profiler first. I've already secured a sizeable tools budget from my manager, so price wouldn't be a problem. What is a good, or even the best, profiler for ASP.NET MVC?
I'm also looking at adding caching. There is currently no second-level or query cache configured for NHibernate; my current thinking is to use Redis for that purpose. I'm also looking at output caching, but unfortunately the majority of the users will be logged in to the site. Is there a way to still cache parts of the pages served by MVC?
Do you have any monitoring or instrumentation set up for the application? If not, I would highly recommend starting there. I've been using New Relic for a few years with ASP.NET apps and have been very happy with it.
Right off the bat you get a nice graph of request response times, broken down into the three kinds of task that contribute to the response time:
.NET CLR - Time spent running .NET code
Database - Time spent waiting on SQL requests
Request Queue - Time spent waiting for application workers to become available
It also breaks down performance by MVC action so you can see which ones are the slowest. You also get a breakdown of performance per database query. I've used this many times to detect procedures that were way too slow for heavy production loads.
If you want to, you can have New Relic add some unobtrusive JavaScript to your pages that instruments browser load times. This helps you figure out things like "my users outside North America spend on average 500 ms loading images - I need to move my images to a CDN!"
I would highly recommend you use some instrumentation software like this. It will definitely get you pointed in the right direction and help you keep your app available and healthy.
SQL Server Profiler is a handy tool for watching how apps communicate with your database and for debugging odd behaviour. It's not a long-term solution for performance instrumentation, though, given that it puts a load on your server and the results require quite a bit of laborious processing before they paint a clear picture for you.
Random thought: check your application pool configuration and keep an eye on the event log for too many recycling events. When an application pool recycles, it takes a long time to become responsive again. It's one of those things that can kill performance while you rip your hair out trying to track it down. Improper recycling settings bit me recently, which is why I mention it.
For NHibernate analysis (session queries, caching, execution time) you could use the Hibernating Rhinos NHibernate Profiler. It's developed by people deeply involved with NHibernate itself, so you know it will work really well with it.
Here is the URL for it:
http://hibernatingrhinos.com/products/nhprof
You could give it a try and decide if it helps you or not.
I developed an application for a client that uses Play Framework 1.x and runs on GAE. The app works great, but sometimes it is crazy slow: it can take around 30 seconds to load a simple page, while at other times the same page loads quickly - with no code change whatsoever.
Is there any way to identify why it's running slow? I tried to contact support, but I couldn't find any telephone number or email address, and there has been no response on the official Google group.
How would you approach this problem? My customer is currently very angry because of the slow loading times, but switching to another provider is the last option at the moment.
Use GAE Appstats to profile your remote procedure calls (RPCs). All of the RPCs are slow (Google Cloud Storage, Google Cloud SQL, ...), so if you can reduce the number of RPCs or put a caching layer in front of them, your application will be much faster. Appstats will show you which parts are slow and whether they need attention.
For example, I created a cache in front of Google Cloud Storage for my application and cut execution time from 2 minutes to under 30 seconds. The RPCs are the bottleneck on GAE.
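The caching idea itself is simple; here is a minimal sketch using App Engine's memcache API (Python runtime shown - the fetch function and the cache lifetime are placeholders for whatever suits your data):

```python
from google.appengine.api import memcache

def read_from_cloud_storage(path):
    ...  # placeholder for the slow Cloud Storage RPC

def get_blob(path):
    # Try memcache first; fall back to the RPC and cache the result.
    data = memcache.get(path)
    if data is None:
        data = read_from_cloud_storage(path)
        memcache.set(path, data, time=600)  # cache for 10 minutes
    return data
```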
Google does not usually provide contact support for many of its services. The App Engine slowness described here is probably caused by a cold start: App Engine front-end instances are shut down after roughly 15 minutes of inactivity. You could write a cron job that pings your instances every 14 minutes to keep them warm.
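On the Python runtime the keep-alive piece would look roughly like this (a Java/Play app would pair an equivalent URL with cron.xml; the handler path is a placeholder):

```python
# Keep-alive handler sketch. Schedule it in cron.yaml, e.g.:
#
#   cron:
#   - description: keep an instance warm
#     url: /ping
#     schedule: every 14 minutes
#
import webapp2

class PingHandler(webapp2.RequestHandler):
    def get(self):
        # Doing nothing useful is fine; the request itself keeps the instance alive.
        self.response.write("ok")

app = webapp2.WSGIApplication([("/ping", PingHandler)])
```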
Combining some answers and adding a few things to check:
Debug using Appstats. Look for "staircase" patterns and repeated RPC calls. Maybe something in your app is triggering RPC calls at certain points that don't happen in your logic all the time.
Tweak your instance settings. Add some permanent/resident instances and see if that makes a difference. If you are spinning up new instances, things will be slow, probably for around the time frame (30 seconds or more) you describe, and it will seem random. It's not just how many instances you have, but which combination of the sliders you are using (you can actually hurt yourself with too few or too many).
Look at your app itself. Are you doing lots of memory allocations in the JVM? Allocating and freeing memory is a relatively slow operation and can cause pauses. Are you sure your freezing is not a JVM issue? Try replicating the problem locally, tweak the JVM -Xmx and -Xms settings, and see if you find similar behavior. Also profile your application locally for memory and performance issues; you can cut down on allocations using pooling, DI containers, etc.
Are you running any cron jobs or other processing on your front-end instances? Try to move as much as you can, such as sending emails, to background tasks. The intervals may seem random, but they can be the result of things firing according to your job settings: "9 am every day" may not mean what you think, depending on the cron/task options. A corollary: move work to back-end instances and pull queues - a minimal example is sketched below.
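For instance, with the App Engine task queue API the front end can hand work off and return immediately (Python shown; the worker URL and parameters are placeholders):

```python
# Push slow work (e.g., sending email) off the front-end request path.
from google.appengine.api import taskqueue

def on_signup(user_email):
    # Returns immediately; a worker handles /tasks/send_welcome later.
    taskqueue.add(url="/tasks/send_welcome", params={"to": user_email})
```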
It's tough to give you a good answer without more information. The best someone here can do is give you a starting point, which pretty much every answer here already has.
By making at least one instance permanent (resident), you get a great improvement on first use. It takes about 15 seconds to load the application into a fresh instance, which is why you experience long request times when nobody has been using the application for a while.
I am creating a Silverlight Pivot collection with 31K items (and images); however, when I use the DeepZoomTools library to create the Deep Zoom images, it takes hours and hours (and hasn't actually completed even once).
Is there a multi-threaded or distributed way in which such collections could be created?
It is a time-intensive process, to be sure. Do your individual data points change often? What we have found in nearly all of our projects is that the image for an individual item almost never changes. That lets you streamline the process quite a bit.
What I do in a case like this is process the entire dataset once, up front. The next time I run the process, I only update the images that have been added or modified since then; in almost all of my cases this solved the problem you are running into. In fact, when it works, I plug my card generation into whatever business applications are running and generate or modify a card whenever data is added or changed in the system. This removes the need for batch processing altogether after the initial build.
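Here is the incremental idea, sketched roughly in Python - the manifest name and create_deep_zoom_image() are placeholders, since the real DeepZoomTools calls are .NET:

```python
# Fingerprint each source image and only regenerate Deep Zoom output
# when the fingerprint changes between runs.
import hashlib
import json
import os

MANIFEST = "processed.json"

def create_deep_zoom_image(path):
    ...  # placeholder for the actual (slow) Deep Zoom generation call

def fingerprint(path):
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def process_changed(image_dir):
    # Load the record of what was processed on previous runs, if any.
    seen = json.load(open(MANIFEST)) if os.path.exists(MANIFEST) else {}
    for name in os.listdir(image_dir):
        path = os.path.join(image_dir, name)
        digest = fingerprint(path)
        if seen.get(name) != digest:      # new or modified since last run
            create_deep_zoom_image(path)
            seen[name] = digest
    with open(MANIFEST, "w") as f:
        json.dump(seen, f)
```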
If that will not work for you, take a look at the code for PAuthor. It uses DeepZoomTools and does so in a multi-threaded way, so you should be able to find the code you are looking for there: PAuthor - CodePlex.
Let me know if you have more specifics about your needs and we can see if we can come up with something.
I have a database that is part of a library information system. It keeps track of the books borrowed by customers, stores the due dates, and automates notifying customers of their accountability when a book is returned past its due date.
Now, I am using MySQL as the DBMS. What I know is that MySQL's notion of time depends on the system time. When checking whether a borrowed book is already past its due date, I compare the current system time with the due date stored for that borrowed book. The database server will actually be running on a PC running Windows XP.
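To make the setup concrete, the overdue check looks roughly like this - the table and column names are placeholders:

```python
# Sketch of the comparison in question, using Python and MySQL.
# NOW() uses the database server's clock, which is exactly the
# dependency I'm worried about.
import MySQLdb

db = MySQLdb.connect(host="localhost", user="librarian",
                     passwd="secret", db="library")
cur = db.cursor()
cur.execute("""
    SELECT customer_id, book_id, due_date
    FROM borrowed_books
    WHERE returned_at IS NULL AND due_date < NOW()
""")
overdue = cur.fetchall()
```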
My problem is that when the system time gets changed, the integrity of the data and the accountability checks are compromised. Is there a way to work around this? Is there some sort of 'independent time' that I could use? Thanks a lot!
NOTE: Yeah, I'm afraid the application does not have a connection to the Internet.
I think you're trying to program around a problem your application shouldn't worry about. Your app gets its time from the computer; you need to be able to rely on that for accuracy. If the time gets changed, then the time was wrong - so what does that mean for old data? How long was it wrong? It's really not something you can solve programmatically.
A better solution is to make sure the time isn't wrong in the first place. Use the Windows Time service to sync against a time server to ensure accuracy.
If your PC is joined to a Windows domain, you can also have the computer's clock continuously synchronize with your domain server using the Windows Time Service.
If your PC has Internet access, it can even set its time against the US National Institute of Standards and Technology (NIST) time service. Instructions and an overview of how to use it can be found on the NIST Internet Time website.
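Given the note that the machine has no Internet access, the time source would have to be something reachable on the local network, such as a domain controller. If such a server exists, the application can at least sanity-check the system clock against it. Here is a minimal SNTP sketch in Python, where the server name is a placeholder for whatever is reachable in your environment:

```python
# Ask a reachable time server for an independent timestamp via SNTP,
# so the app can detect a skewed system clock.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server="dc01.example.local", port=123):
    packet = b"\x1b" + 47 * b"\0"          # minimal SNTP v3 client request
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, (server, port))
        data, _ = s.recvfrom(48)
    transmit = struct.unpack("!I", data[40:44])[0]  # server transmit time (s)
    return transmit - NTP_EPOCH_OFFSET

drift = sntp_time() - time.time()
print("system clock is off by about %.1f seconds" % drift)
```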
I would configure an authoritative time server in Windows XP. Here is a step-by-step process.