First, I apologize for posting a question that has been discussed in the past.
After searching for days, I still don't understand how to find out that a device with a 1K or even higher resolution is a smartphone.
For example, my monitor is 1280 x 1024 and I want to have 3 columns on desktop screens, but how do I know when a resolution of 1K or higher belongs to a smartphone, so that I can shrink my layout to one column?
I was wondering how much .scn data I should have in my SceneKit application. Are there any limitations other than the general iOS app size limitations (found in this post)?
I'd expect it should be a reasonable size compared to the device's RAM. So, to keep the app from crashing on a specific device, should it simply avoid getting too close to the limits mentioned in this post? E.g. if the RAM is 1 GB, should I just stay away from anything around 400 MB to be safe?
Furthermore, how much .scn data can I have loaded at one time? E.g. if there are 6 scenes of 40 MB each, it might not be a good idea to have all of them loaded into memory at once (240 MB), but would it be okay to swap them in and out as needed? That would still be a large amount of data with the over-the-air download limit in mind.
I lack experience with what to expect from an application using SceneKit scenes and couldn't find much to go on.
It seems incorrect to talk about Apple's game engines (like SceneKit or RealityKit) in terms of hard size limits in such a context. As you know, game engines render 3D content in real time at 60 fps; if frames are dropped, the app displays its content intermittently. As soon as you cross the "red line", you'll get those notorious dropped frames.
What can make you cross this "red line"? Good question. Usually, scenes containing more than 100K polygons in total (especially on iOS), heavy contact physics, emission of a large number of particles, hi-res 4K textures, real-time shadows, and a large number of PBR materials will almost certainly make your app start skipping frames.
How do you deal with it? There is no miracle remedy. As soon as you notice that your app "stutters", first of all control the number of polygons (keep it under 100K) and the texture sizes (keep them under 2K). Physics and particles are harder to control, but they need to be kept in check too. For example, a single particle emitter will work nicely even if it emits around 20K particles per second (with a moderate lifespan), not millions of them.
This table helps you understand what certain entities affect (macOS version).
| SceneKit entity | What does it affect? |
| --- | --- |
| Hi-res textures (4K or 8K) | Consume considerably more RAM |
| Higher number of polygons | Increases CPU processing and consumes more RAM |
| Higher number of particles | Increases GPU/CPU processing and consumes more RAM |
| Higher number of PBR materials | Increases GPU processing and consumes more RAM |
| Hi-res poly-models with forward shadows | Considerably increase GPU processing |
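To get a feel for why the texture row dominates RAM, here is a back-of-envelope calculation (a sketch assuming uncompressed RGBA8 textures at 4 bytes per pixel; compressed formats such as ASTC shrink this considerably):

```python
# Rough RAM cost of an uncompressed RGBA8 texture (4 bytes per pixel).
# A full mipmap chain adds roughly one third on top of the base level.
def texture_mb(side_px: int, mipmaps: bool = True) -> float:
    base_bytes = side_px * side_px * 4
    total_bytes = base_bytes * (4 / 3 if mipmaps else 1)
    return total_bytes / (1024 * 1024)

for side in (1024, 2048, 4096, 8192):
    print(f"{side}x{side}: ~{texture_mb(side):.0f} MB")

# 1024x1024: ~5 MB
# 2048x2048: ~21 MB
# 4096x4096: ~85 MB
# 8192x8192: ~341 MB
```

A handful of 4K or 8K textures can therefore consume hundreds of MB on their own, which is why the 2K guideline above matters on devices with only 1 GB of RAM.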
I'm not too sure what the best place for this is.
I'm working on an app that requires me to find points of interest that are within a specific radius of a user's location.
For example, I grab the user's location as lat/long coordinates and want to find all the items within a 20-mile radius.
Right now I have a MySQL database with 450,000 records, each record containing a lat and long. I then run a prepared statement to grab X records within a 20 meter radius.
This is quite slow and intensive on the database.
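For illustration, the kind of per-request lookup described might look roughly like this (a sketch with hypothetical table and column names, computing a great-circle distance for every row, which is the expensive part in this sketch):

```python
import mysql.connector  # pip install mysql-connector-python

# Hypothetical connection details, table and column names.
conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="poi"
)

# Great-circle distance in miles (3959 ~= Earth's radius in miles),
# evaluated for every one of the 450,000 rows on each request.
SQL = """
    SELECT * FROM (
        SELECT id, name, lat, lng,
               (3959 * ACOS(
                     COS(RADIANS(%s)) * COS(RADIANS(lat)) * COS(RADIANS(lng) - RADIANS(%s))
                   + SIN(RADIANS(%s)) * SIN(RADIANS(lat))
               )) AS distance_miles
        FROM points_of_interest
    ) AS d
    WHERE distance_miles < %s
    ORDER BY distance_miles
    LIMIT 100
"""

def nearby(user_lat: float, user_lng: float, radius_miles: float = 20.0):
    cur = conn.cursor(dictionary=True)
    cur.execute(SQL, (user_lat, user_lng, user_lat, radius_miles))
    return cur.fetchall()

print(nearby(51.5074, -0.1278))  # e.g. around central London
```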
Are there better ways to optimise lookups when using MySQL, or is there a purpose-built system for this?
Right now this is a hobby project, so a paid service that does this is probably out of my $0 budget.
Any and all suggestions are appreciated.
Can I set a DataStream time window to a large value like 24 hours? The reason for the requirement is that I want to compute statistics over the latest 24 hours of client traffic to the web site. That way, I can check for security violations.
For example, check whether a user account used multiple source IPs to log on to the web site, or check how many unique pages a certain IP accessed in the latest 24 hours. If a security violation is detected, the configured action is taken in real time, such as blocking the source IP or locking the relevant user account.
The throughput of the web site is around 200 Mb/s. I think setting the time window to a large value will cause memory issues. Should I instead store the statistics results of smaller time windows, say 5 minutes each, in a database?
And then compute the 24-hour statistics with database queries over the data generated in the latest 24 hours?
I don't have any experience with big data analysis. Any advice will be appreciated.
It depends on what type of window and aggregations we're talking about:
- Window with no eviction: in this case Flink only saves one accumulated result per physical window. This means that for a sliding window of 10 h with a 1 h slide that computes a sum, it has to keep that number 10 times (once per overlapping physical window). For a tumbling window (regardless of its parameters) it only saves the result of the aggregation once. However, this is not the whole story: because state is keyed, you have to multiply all of this by the number of distinct values of the field used in the group by.
- Window with eviction: Flink saves all events that have been processed but not yet evicted.
In short, memory consumption is generally not tied to how many events you processed or to the window's duration, but to:
- The number of windows (considering that one sliding window actually maps to several physical windows).
- The cardinality of the field you're using in the group by.
All things considered, I'd say a simple 24-hour window has an almost nonexistent memory footprint (see the back-of-envelope sketch below).
You can check the relevant code here.
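As a rough illustration of the estimate above, here is a back-of-envelope calculation (a sketch with made-up numbers; the real accumulator size depends on what you aggregate, e.g. a distinct-page set is larger than a plain counter):

```python
# Rough upper bound on keyed window state for an incremental aggregate
# with no eviction:
#   state ~= physical_windows_per_key * key_cardinality * accumulator_bytes
def window_state_mb(window_h: float, slide_h: float, keys: int, acc_bytes: int) -> float:
    physical_windows = max(1, round(window_h / slide_h))  # tumbling window -> 1
    return physical_windows * keys * acc_bytes / (1024 * 1024)

# e.g. 1 million distinct source IPs, ~100 bytes per accumulator
print(window_state_mb(24, 24, 1_000_000, 100))  # tumbling 24 h          -> ~95 MB
print(window_state_mb(24, 1, 1_000_000, 100))   # 24 h sliding every 1 h -> ~2289 MB
```

In other words, the 200 Mb/s of raw traffic never has to sit in state; what matters is how many keys and how many overlapping windows you keep accumulators for.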
I'm doing a project with Elasticsearch, and the goal of the project is to optimise request time. Right now I'm testing with 1 GB of data and the request takes about 1200 ms. I want to estimate the time with 60 GB of data, so I'm asking if there are techniques to calculate the complexity of my query?
Thanks
Unfortunately, it's not as easy as extrapolating: if the request takes 1200 ms with 1 GB of data, it doesn't mean it'll take 60 times longer with 60 GB. Going from 1 GB to 60 GB of data has unpredictable effects and depends entirely on the hardware you're running your ES cluster on. The server might be fine for 1 GB but not for 60 GB, so you might need to scale out, but you won't really know until you have that much data.
The only way to really know is to get to 60 GB of data, scale your cluster appropriately (start big and scale down), and do your testing on a realistic amount of data.
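If it helps, one way to follow that advice is to re-run the same query at each data size and record the timings rather than trying to extrapolate. A sketch, assuming the official Python client (8.x-style keyword arguments) and a hypothetical index and field:

```python
import time
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical cluster address

def measure(index: str, query: dict, runs: int = 10) -> None:
    # Repeat the query and report both the server-side "took" time and the
    # client-side wall time, so cache warm-up and outliers are visible.
    for i in range(runs):
        start = time.perf_counter()
        resp = es.search(index=index, query=query, size=10)
        wall_ms = (time.perf_counter() - start) * 1000
        print(f"run {i}: took={resp['took']} ms, "
              f"wall={wall_ms:.0f} ms, hits={resp['hits']['total']['value']}")

measure("my-index", {"match": {"title": "example"}})  # hypothetical index/field
```

Measuring again as the index grows toward 60 GB will tell you far more than any complexity formula.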
I am creating a crawler that, for argument's sake, will crawl 1 billion pages. I know that is the absolute maximum number of pages I will ever crawl, and I need to store as much information as possible about each page. The crawler is Nutch with Solr.
How can I reliably decide on the size of hard disk I will need to maintain this amount of data? I can't find any information on how much space a record takes up in Nutch, and I need to know so I can see how realistic it is to host this on one drive, and if not, what my other options are.
If it takes, say, 100 kilobytes per page, 1 billion pages will need 100 KB x 1,000,000,000 / 1024 / 1024 / 1024 ≈ 93 terabytes. This is a LOT. But if it is only a kilobyte per page, or even a half or a quarter of that, the total drops to roughly 1 TB or less, which would make storing it on only a few servers far more realistic.
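To make the arithmetic explicit, here is the same back-of-envelope estimate for a few assumed average sizes per page (illustrative numbers only, for the stored content before indexing):

```python
# Back-of-envelope storage estimate for 1 billion crawled pages
# at a few assumed average sizes per page.
PAGES = 1_000_000_000

def total_tb(bytes_per_page: int) -> float:
    return PAGES * bytes_per_page / 1024 ** 4

for label, size in [("0.25 KB", 256), ("1 KB", 1024),
                    ("100 KB", 100 * 1024), ("319 KB", 319 * 1024)]:
    print(f"{label:>7} per page -> {total_tb(size):7.2f} TB")

# 0.25 KB per page ->    0.23 TB
#    1 KB per page ->    0.93 TB
#  100 KB per page ->   93.13 TB
#  319 KB per page ->  297.09 TB
```

The assumed per-page size dominates everything else in the estimate.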
You've already done an estimate, but the low end of it is probably way off. Almost no modern web page is only 1 KB in size (MSN.com is 319 KB, 58.8 KB gzipped), and 1B web pages is, depending on who you ask, a measurable fraction of the relevant pages on the internet today. Keep in mind, too, that you probably don't just want to store the actual page content, but also index it. That will involve several indices, depending on what kind of use you expect from the index. Much of the content will probably also be parsed and transformed into other content, which will be indexed separately for different usage.
So the only possible answer to such a question is "it depends", and "good luck". Additionally, ~93 TB is not a LOT of storage today, and could be handled by a single server (storage-wise; index usage and query volume will require more servers, but it all depends on what you're going to use it for).
Start somewhere and see where it takes you.