Blockchain usage in smart home - database

I am currently gathering information for my thesis about smart home implementation using blockchain. The blockchain usage is a requirement from my thesis supervisor, and I am having a huge problem figuring out how blockchain technology may be useful in home automation.
What I have already considered is that there are two types of blockchain I could use: private and public.
A public blockchain won't be useful at all, because consensus takes a long time and every transaction costs money (fees for the miners).
I also don't see any advantage of a private blockchain over a regular database in such an application, for two reasons:
- I won't be able to store the blockchain on every smart home device, because they all have limited storage. So if I need to store the blockchain in some centralised way, I think it loses its immutability advantage.
- Public key cryptography is a very nice thing, but I can achieve that with a regular database as well, so I don't see the need to implement a blockchain just for that.
So am I missing something? How can the use of blockchain be helpful in such a small project?
Thanks in advance for any advice! :)

I think this is related to development, because it starts with making the right choice of blockchain for smart home devices. There are blockchains out there which focus on devices with small storage, for example IOTA.
As far as I understand, you are looking for a solution that doesn't have a SPoF (Single Point of Failure) and that is decentralized, without using the devices' own storage.
Personally, I think BigChainDB (https://www.bigchaindb.com/developers/getstarted/) could be the best solution for you. Set up some nodes and let the devices authenticate with it through MongoDB, or simplify it with API access. It's a great base for deploying decentralized applications such as yours.
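To give an idea of what that looks like in practice, here is a minimal sketch using the BigchainDB Python driver; the node URL, device identifiers, and asset/metadata payloads are my own illustrative assumptions:

```python
# Minimal sketch of a smart home device recording an event on a BigchainDB
# network via the bigchaindb_driver package. The node URL and payloads below
# are illustrative assumptions.
from bigchaindb_driver import BigchainDB
from bigchaindb_driver.crypto import generate_keypair

bdb = BigchainDB('https://example-bigchaindb-node:9984')  # hypothetical node URL

device = generate_keypair()  # each device holds its own keypair

# The event the device wants to record, e.g. a sensor reading.
asset = {'data': {'device_id': 'thermostat-01', 'type': 'temperature_reading'}}
metadata = {'celsius': 21.5}

prepared = bdb.transactions.prepare(
    operation='CREATE',
    signers=device.public_key,
    asset=asset,
    metadata=metadata,
)
signed = bdb.transactions.fulfill(prepared, private_keys=device.private_key)
bdb.transactions.send_commit(signed)  # waits until the transaction is committed
```

The point is that the devices only need a keypair and network access; the chain itself lives on the BigchainDB nodes, not on the devices.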


Is there a simple way to host a program with a small data file (esp. on Heroku)?

If you go to my Heroku-hosted to-do list program, you can put test data in, but it's gone pretty soon. This is because, I learned, Heroku has an "ephemeral" filesystem and disposes of any data that users write to it via POST. I don't know how to set up a PostgreSQL database or any other kind of database (although maybe I soon will, as I'm working through Hartl's Rails tutorial). I'm just using a humble YAML file. It works fine in my local environment.
Any suggestions for beginners to work around this problem, short of just learning how to host a database? Is there another free service I might use that would work without further setup? Any advice greatly welcome.
I fully understand that I can't do what I'm trying to do with Heroku (see e.g. questions like this one). I just want to understand my options better.
UPDATE: Looks like this and this might have some ideas about using Dropbox to host (read/write) flat files.
The answer is no. But I'll take a minute to explain why.
I realize that you aren't yet familiar with building web applications, databases, and all that stuff. And that's OK! This is an excellent question.
What you need to know, however, is that doing what you're asking is a really bad idea when you're trying to build scalable websites. And Heroku is a platform company that SPECIFICALLY tries to help developers build scalable websites. That's really what the platform excels at.
While Heroku is really easy to learn and use, it isn't targeted at beginners. It's meant for experienced developers. This is really clear if you take a look at what Heroku's principles are, and what policies they enforce on their platform.
Heroku goes out of their way to make building scalable websites really easy, and makes it VERY difficult to do things that would make building scalable websites harder.
So, let's talk for a second about why Heroku has an ephemeral file system in the first place!
This design decision forces you (the developer of the application) to store files that your application needs in a safer, faster, dedicated file storage service (like Amazon S3). This practice results in a lot of scalability benefits:
If your webservers don't need to write to disk, they can be deployed many many times without worrying about storage constraints.
No disks need to be shared across webservers. Sharing disks typically causes IO contention and can adversely affect performance.
It makes it easy to scale your web application horizontally across commodity servers, since disk resources aren't required.
So, the reason why you cannot store flat files on Heroku is because doing this causes scalability and performance problems, and would make it nearly impossible for Heroku to help you scale your application easily (which is their main goal).
That is why it is recommended to use a file storage service to store files (like Amazon S3), or a database for storing data (like Postgres).
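If you go the file-storage route, the general shape looks something like this: a minimal sketch in Python with boto3, where the bucket name and object key are made-up placeholders.

```python
# Sketch of storing the app's data in S3 instead of the dyno's ephemeral disk.
# Bucket name and object key are made-up placeholders; credentials are expected
# to come from the environment or an IAM role.
import boto3

s3 = boto3.client('s3')

def save_todos(yaml_text: str) -> None:
    s3.put_object(Bucket='my-todo-app-data', Key='todos.yml',
                  Body=yaml_text.encode('utf-8'))

def load_todos() -> str:
    obj = s3.get_object(Bucket='my-todo-app-data', Key='todos.yml')
    return obj['Body'].read().decode('utf-8')
```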
What I'd recommend doing (personally) is using Heroku Postgres. You mentioned you're using Rails, and Rails has excellent Postgres support built in. It has what's called an ORM that lets you talk to the database using some very simple Ruby objects, and removes almost all of the database background you'd otherwise need to get things going. It's really fun / easy once you give it a try!
Finally: Heroku Postgres also has a great free plan, which means you can store the data for your todo app in it for no cost at all.
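If you're curious what the ORM is doing for you under the hood, the raw version of "talk to Heroku Postgres" looks roughly like the sketch below, written in Python with psycopg2 for illustration; the todos table and its columns are my own assumptions. On Heroku the add-on exposes the connection string in the DATABASE_URL environment variable, which Rails/ActiveRecord will normally pick up for you automatically.

```python
# Sketch of talking to Heroku Postgres directly with Python and psycopg2.
# Heroku exposes the connection string in the DATABASE_URL environment variable;
# the "todos" table and its columns are illustrative assumptions.
import os
import psycopg2

conn = psycopg2.connect(os.environ['DATABASE_URL'])

with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS todos (
            id    SERIAL PRIMARY KEY,
            title TEXT NOT NULL,
            done  BOOLEAN DEFAULT FALSE
        )
    """)
    cur.execute("INSERT INTO todos (title) VALUES (%s)", ("buy milk",))
    cur.execute("SELECT id, title, done FROM todos")
    for row in cur.fetchall():
        print(row)
```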
Hope this helps!

Is there any way to download a public DB to hard drive?

I'm a social science researcher, and I'm working with data from various public databases of NGOs, governments, etc. Let's assume that I've got no opportunity to ask the admins for the whole database. However, if I have enough patience, I'm able to download all the data one by one. But the size of the DB makes it almost impossible to solve the problem with brute force.
So, is there any way to download a public DB with all of its components?
Here's an example:
http://www.trademap.org/tradestat/Country_SelProductCountry_TS.aspx
You can see the Japanese live animal imports (USD) by importing country. Is there a faster way to download all the data for every country and product than clicking through them one by one?
Yes, there exist software and web services for scraping. You can find them easily with Google - this is a programming, not a software recommendations site.
Beware that the use of automatic downloading tools may violate the terms of service and get you into legal trouble. Also, websites may block your access if you access them too fast.
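If you do end up writing your own scraper, the basic shape looks something like the sketch below (Python, requests + BeautifulSoup). The URLs and HTML structure are placeholders; a site like trademap.org is an ASP.NET application, so in practice you would also need to handle session cookies, form posts and pagination, and you should check its terms of use first.

```python
# Illustrative table-scraping sketch (Python, requests + BeautifulSoup).
# The URL and HTML structure are placeholders; a real site may require session
# cookies, form posts and pagination handling, and may forbid scraping entirely.
import csv
import time

import requests
from bs4 import BeautifulSoup

def scrape_table(url):
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, 'html.parser')
    table = soup.find('table')  # assumes the data sits in the first <table>
    return [[cell.get_text(strip=True) for cell in row.find_all(['th', 'td'])]
            for row in table.find_all('tr')]

with open('export.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for page_url in ['https://example.org/data?page=1']:  # placeholder URL list
        writer.writerows(scrape_table(page_url))
        time.sleep(2)  # be polite: throttle requests so you don't get blocked
```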

Device Strategy for mobile application testing

I am looking for a device strategy and how I should approach it. We are currently using real devices for testing, and I need to present to my customer whether we should continue using real devices or switch to cloud-based devices.
1. I have a team at one location only.
2. I can use open source tools like Appium for automation.
I am not able to come to a conclusion, as there are also security concerns with cloud-based providers like Perfecto. I know they have private offerings, but the customer is still not convinced on the security side.
Please suggest an approach for why I should use the cloud and how much I should use it.
This post might answer your question. It answers different questions on mobile application testing.
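One point worth showing the customer, whichever way you go: with Appium, the test code itself is largely independent of whether the device is a real one on your desk or one in a vendor's cloud; typically only the server URL and a few capabilities change. Here is a rough sketch in Python, written against the classic Appium Python client API, where every value is a placeholder:

```python
# Sketch showing that an Appium test stays the same whether the device is local
# or in a vendor's cloud; only the server URL and capabilities change.
# Written against the classic Appium Python client API (desired_capabilities);
# newer client versions use an options object. All values are placeholders.
from appium import webdriver

def make_driver(use_cloud: bool):
    caps = {
        'platformName': 'Android',
        'deviceName': 'Pixel_5',          # placeholder device
        'app': '/path/to/app-debug.apk',  # placeholder app path
        'automationName': 'UiAutomator2',
    }
    if use_cloud:
        # A cloud vendor (Perfecto, BrowserStack, ...) supplies its own hub URL
        # and credential capabilities; the test logic itself does not change.
        server = 'https://cloud-vendor.example.com/wd/hub'  # placeholder URL
        caps['vendor:securityToken'] = 'YOUR_TOKEN'         # placeholder capability
    else:
        server = 'http://localhost:4723/wd/hub'  # local Appium server, real device
    return webdriver.Remote(server, desired_capabilities=caps)

driver = make_driver(use_cloud=False)
# ... find elements, drive the app, make assertions ...
driver.quit()
```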

suggested architecture / frameworks for collaborative mobile planning app?

I'm designing an app which needs to have some collaboration functionality, so one or many users can edit certain attributes of an event they plan together. E.g. the main admin can change the title, picture, etc., while all admin users could change the date, for example.
I would like to get some ideas on how one would approach this in the modern world. Are there fancy frameworks, etc.?
Q: Is the best way to store it centrally on some server, or would some peer-to-peer data storage work?
Q: My gut feeling is that a web application would probably be the easiest way, where people work on the object stored on the server instead of trying to sync a local copy with some central repository.
Is this correct?
Q: Are there mobile frameworks which could do the syncing, locking, etc. for me?
Thank you for some hints and suggestions. I know the questions are a bit broad, but I'm looking for directions, not finished solutions. Thank you.
Kind regards
Fred
Some thoughts:
1a) There is no "best" way without a metric for better/best. But yes, having a server is almost certainly simpler, which is probably part of 'best' for most of us.
1b) Actually, there is always a server. Even p2p systems have clients and servers, it's just that every node is both a server and a client.
2) Yes, a web app would certainly give you a lot of plumbing for free, and would probably be the fastest/cheapest route to a working app. An alternative would be an olde worlde client/server database. A shinier approach might be mobile apps which use a web service to communicate with a central server.
3) Databases do that. But actually, if you use a web app it's probably not hard; see the sketch at the end of this answer.
Analogies:
Apart from web apps, version control systems do exactly what you've just described. They even do offline editing and subsequent merging.
Straightforward CRUD applications against a database also do what you've just described.
But perhaps I'm under-estimating what you mean by collaboration?
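To make point 3 concrete, here is a rough sketch of the "central server, plain CRUD over HTTP" approach with a version counter for optimistic locking, in Python with Flask. The routes, fields, and in-memory store are illustrative assumptions, not a finished design:

```python
# Sketch of the "central server, plain CRUD over HTTP" approach with a version
# counter for optimistic locking (Python + Flask). Routes, fields and the
# in-memory store are illustrative assumptions.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# In a real app this lives in a database; a dict keeps the sketch short.
events = {1: {'title': 'BBQ', 'date': '2024-07-01', 'version': 1}}

@app.route('/events/<int:event_id>', methods=['GET'])
def get_event(event_id):
    event = events.get(event_id)
    if event is None:
        abort(404)
    return jsonify(event)

@app.route('/events/<int:event_id>', methods=['PUT'])
def update_event(event_id):
    event = events.get(event_id)
    if event is None:
        abort(404)
    payload = request.get_json()
    # Optimistic locking: the client sends back the version it last saw.
    if payload.get('version') != event['version']:
        abort(409)  # conflict: someone else edited the event in the meantime
    event.update({k: v for k, v in payload.items() if k in ('title', 'date')})
    event['version'] += 1
    return jsonify(event)

if __name__ == '__main__':
    app.run()
```

The mobile clients then just fetch the event, let the user edit it, and send it back with the version they started from; the server decides who wins.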

How does Facebook's design resolve the performance issue of fetching the friend list?

I have a web design question regarding performance that I'd like advice on. On a web site there is a lot of personalized information, for example the friends of a Facebook user. By personalized I mean that different users have different friend lists.
Suppose the friend list is stored in a database like Oracle or MySQL. Each time the user clicks Home on their Facebook page or logs in, we need to read the database again, and each time the user adds or removes a friend, the database needs some update operations.
My question is: I think the performance capability of a database (e.g. the concurrency of read/write transactions) is limited, so if Facebook uses a database to store friend lists, it is hard to achieve good performance. But if it doesn't use a database (e.g. MySQL or Oracle), how does Facebook implement such a personalization function?
This is a pretty good article about the technology behind Facebook.
As Justin said, it looks like a combination of Memcached and Cassandra.
Facebook and other large sites typically use a caching layer to store that kind of data so that you don't have to make a round trip to the database each time you need to fetch it.
One of the most popular is Memcached (which, last I remember reading, is used by Facebook).
You could also check out how some sites are using NoSQL databases as their caching layer. I actually just read an article yesterday about how Stack Overflow uses Redis to handle its caching.
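As a concrete illustration of that cache-aside pattern, here is a minimal sketch in Python with the redis package; load_friends_from_db() is a placeholder for whatever query actually backs the friend list:

```python
# Sketch of the cache-aside pattern described above (Python + the redis package).
# load_friends_from_db() is a placeholder for whatever query backs the friend list.
import json

import redis

cache = redis.Redis(host='localhost', port=6379)

def load_friends_from_db(user_id):
    # Placeholder for the (comparatively slow) database query.
    return []

def get_friends(user_id):
    key = f'friends:{user_id}'
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: no database round trip
    friends = load_friends_from_db(user_id)
    cache.set(key, json.dumps(friends), ex=3600)  # expire after an hour
    return friends

def add_friend(user_id, friend_id):
    # ... write the new friendship to the database here ...
    cache.delete(f'friends:{user_id}')  # invalidate so the next read is fresh
```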
From what I can gather they use a MySQL cluster, memcached, and lots of custom-written software. They open source plenty of it: http://developers.facebook.com/opensource/
The solution is to use a super-fast NoSQL-style database. Start with Simon Willison's excellent tutorial on Redis, and it will all begin to become clear :)
