I have Dropbox Plus, both as the desktop app and online. It turns out that I have very large data files to handle and cannot store them in the app, or a lot of disk space would be lost. Instead I can sync them online only and have them stored on the Dropbox servers.
Now, the problem is that I need to use those .dta files directly from dropbox.com, which apparently is not as easy as I thought. My naive approach was simply to copy-paste the link provided by dropbox.com:
use "https://www.dropbox.com/s/cxmbo2gsw8yuoic/pcs.dta",clear
However, it did not work. Does anyone know how I can do that in Stata, please?
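A likely culprit, offered as a hedged guess rather than a verified fix: a dropbox.com share link (usually ending in ?dl=0) returns an HTML preview page, not the file itself. Appending ?dl=1 to request the direct download often works, assuming your Stata version can read https URLs:

use "https://www.dropbox.com/s/cxmbo2gsw8yuoic/pcs.dta?dl=1", clear

If use will not read the URL directly, downloading first with copy is a common fallback:

copy "https://www.dropbox.com/s/cxmbo2gsw8yuoic/pcs.dta?dl=1" pcs.dta, replace
use pcs.dta, clear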
I am working on an Electron project to keep inventory of a warehouse, but I want to store the data on the client side (on the client's desktop/laptop) and not in a cloud database. How do I do this? Is using an xlsx file a good idea for storing the data? It would come with an added bonus: the user could read the data outside the app in an Excel sheet if they wanted to.
P.S.: even if xlsx is one way, I would like to know other possible ways so I can choose which is most comfortable for me. Thank you.
Edit: sorry, I forgot to mention that I might also have to store images in the data.
You have plenty of options. You can store a JSON file and read it when the application boots up. As this is a Node.js-related thing, I would suggest you use electron-store.
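A minimal sketch of the electron-store route; the key name and inventory shape here are invented for illustration, and newer electron-store releases are ESM-only, so you may need import instead of require:

const Store = require('electron-store') // or: import Store from 'electron-store'

const store = new Store() // persists to a JSON file in the app's userData folder

// Save and read back a hypothetical inventory record
store.set('inventory', [{ sku: 'WID-001', name: 'Widget', qty: 12 }])
const items = store.get('inventory', []) // second argument is the default value
console.log(items)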
And xlsx is a good choice, but it may be overkill if what you are storing is very simple. On Windows you can also store some settings in the registry, but I prefer the config-file version.
I have also used an sqlite3 database for some apps. On Android, I believe many apps use the SQLite approach to store a local database.
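For completeness, a small sketch with the Node sqlite3 package; the table and columns are hypothetical, and in Electron you would normally run this in the main process:

const sqlite3 = require('sqlite3')
const db = new sqlite3.Database('inventory.db') // a plain file on the user's machine

db.serialize(() => {
  // Create the schema once, upsert a sample item, then read everything back
  db.run('CREATE TABLE IF NOT EXISTS items (sku TEXT PRIMARY KEY, name TEXT, qty INTEGER)')
  db.run('INSERT OR REPLACE INTO items VALUES (?, ?, ?)', 'WID-001', 'Widget', 12)
  db.each('SELECT sku, name, qty FROM items', (err, row) => console.log(row))
})

For the images you mention in your edit, a common pattern is to keep the image files on disk and store only their paths in the database.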
Changing Mac devices seems to confuse RStudio. I want the working directory to always be a folder on my external hard drive. Any tips?
I think you need to read a bit about .Rproj files and how to use them.
R projects are a way to define your working directory and to keep a separate workspace for each project you open. They let you work on different projects without their files getting mixed together. Another advantage is that if all your data and scripts are within a directory linked to an R project, you can move it around and share it easily.
Here is some information on how to use them.
Another way of approaching it could be to modify your .Rprofile so that it runs setwd('where/you/work') at every R session. There is some info on how to customize your .Rprofile. Note that there are drawbacks to this option, because your code may become non-reproducible when you give it to someone else.
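A minimal .Rprofile sketch, with a made-up mount point that you would replace with your drive's actual path:

# ~/.Rprofile -- runs at the start of every R session
if (dir.exists("/Volumes/MyExternalDrive/r-work")) {
  setwd("/Volumes/MyExternalDrive/r-work")
} else {
  message("External drive not mounted; working directory unchanged.")
}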
I hope someone has already faced the problem of verifying that an application shows the correct data from a database. I reviewed how Groovy uses SQL, but I have no idea where and how I should do that. I'm just starting to use Gradle + Spock + Geb for testing an application. I have a few files where I described a couple of pages from the application, a couple of modules, and a file with a Spock specification. Where and how do I need to connect to the Oracle DB, use SQL, and compare the resulting data with what the application shows?
P.S. I write everything in Notepad++ and launch from the command line by typing 'gradlew firefoxTest'. Is there a more comfortable way to work with Gradle + Spock + Geb?
Thanks in advance.
Because there are no other answers, I wanted to share a solution someone at my company thought of. This assumes you already have a project that uses some sort of JDBC; in our case it is JDBI.
The idea is to extend ClassLoader and then use that to directly access the data access object class via the JVM. That idea should work.
I have not tested it, because it doesn't completely fit our use case. I'll admit that this does not completely apply to your use case either, but technically you could just run the jar of an existing project, which can access the database.
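A more direct route than the ClassLoader idea, sketched here with every page, table, and credential name invented for illustration: open a plain JDBC connection from the spec with groovy.sql.Sql and compare the rows against what Geb reads off the page. This needs the Oracle JDBC driver (ojdbc) on the test classpath.

import groovy.sql.Sql
import geb.spock.GebSpec

class ItemsPageSpec extends GebSpec {

    Sql sql

    def setup() {
        // Placeholder connection details; point these at your Oracle instance
        sql = Sql.newInstance('jdbc:oracle:thin:@dbhost:1521:ORCL',
                              'app_user', 'secret', 'oracle.jdbc.OracleDriver')
    }

    def cleanup() {
        sql?.close()
    }

    def "the page shows the same items as the database"() {
        given: 'the expected names read straight from Oracle'
        def expected = sql.rows('SELECT item_name FROM items ORDER BY item_name')*.item_name

        when: 'the page under test is opened'
        go '/items'

        then: 'the listed names match (the CSS selector is a placeholder)'
        $('table#items td.name')*.text() == expected
    }
}

As for tooling, the free Community edition of IntelliJ IDEA understands Gradle, Groovy, and Spock out of the box, which is a comfortable step up from Notepad++ plus the command line.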
So for backing up any/all of my WordPress sites I use a tool called "Backup Buddy", and it's a great tool and all, but lately it's been really buggy, and today it finally went kaboom!
Usually my workflow is that I develop the site on my local machine using WAMP/MAMP.
When it's done and ready for testing, I use the tool to move it to my personal test server; when I'm happy and the work is approved, I move it to the real server.
Since my tool stopped working (it uploads half the content), I decided to just do it manually: installing WordPress first on the real web server (done), applying my theme (done), then exporting the database SQL from the local server (done), and thereafter importing it to the real server (done). Both times I've done this, the site comes up blank (outcome equals major fail!).
I'm assuming that something has to be changed/done in order for it to work, but I'm not sure what.
Unlike a normal DB, where I can work with the data as usual, since WP is a CMS I'm assuming it ties the data to the domain, but again, I don't know 100% how it works...
Any ideas as to what I'm doing wrong? Because as of now, if I can't do it like this, I'd have to manually recreate ALL the pages. Plus, if I then moved it from my real test server to its final destination, I'd have to manually redo it all again...
Thanks in advance.
You aren't doing anything wrong. It sounds like your workflow could be as follows.
Upload the contents of the site via FTP
Create & Import the database via PHPMyAdmin, changing any info in wp-config.php
Define the site URL in wp-config.php [see below]
Use a tool to find & replace any hard-coded site URLs that WordPress loves to use [see below]
Example code:
Define the site URLs in wp-config.php:
define('WP_HOME','http://example.com');
define('WP_SITEURL','http://example.com');
Find & replace tool: replace http://localhost/ with http://www.your-new-site.com/ throughout the database.
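If you do the replacement straight in MySQL, it could look like the sketch below; the table names assume the default wp_ prefix, and note that a plain REPLACE() can corrupt serialized PHP data, which is why dedicated tools (WP-CLI's wp search-replace, for example) are safer for a full migration:

UPDATE wp_options SET option_value = REPLACE(option_value, 'http://localhost/', 'http://www.your-new-site.com/')
WHERE option_name IN ('siteurl', 'home');
UPDATE wp_posts SET post_content = REPLACE(post_content, 'http://localhost/', 'http://www.your-new-site.com/');
UPDATE wp_posts SET guid = REPLACE(guid, 'http://localhost/', 'http://www.your-new-site.com/');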
That should be it. It's live!
You can export it using phpMyAdmin and then use BigDump to import it. Download BigDump and make sure you read the first note about the exporting process, found here:
http://www.ozerov.de/bigdump/usage/
Here is a bash script you can use to automate this entire process for you: https://github.com/jplew/SyncDB
I have been working on a system to push changes from my Git repository to a live site. The issue is that on my local box (where only I have access) I leave the DB credentials at their defaults, but I don't want them to be defaults on the web.
What would be the best solution for having a few files that are only located on each development computer and are never uploaded/committed, etc.? I was thinking of throwing in an example file so that anyone who clones the repo would know how to create the real credentials file.
I'm pretty new to Git, and I don't think I have the experience to really come up with a good solution for this, so any help would be great.
Thanks,
Max
Your idea of committing an example file and then not actually tracking the real file is a good one.
Just put the name of the real file in .gitignore so that no one will add it by accident.
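A minimal sketch, with hypothetical file names:

# .gitignore -- the real credentials file stays untracked
db-credentials.php

Then commit a db-credentials.example.php with placeholder values; anyone cloning the repo copies it into place and fills in their own credentials:

cp db-credentials.example.php db-credentials.php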