How to create a Makefile.txt for a React app?

I am working on a take-home project for a job interview where I have been tasked with creating a React application with frontend and backend tests. They have instructed me that they will execute my code using "the make targets specified in the Makefile" (which they have provided; see below). I have completed the application and the test cases; however, I am quite lost on this last step.
I understand that a Makefile is essentially used to run scripts, but I'm unsure how to tell it to set up the application environment.
Here is a copy of the Makefile.txt:
.PHONY: $(MAKECMDGOALS)
# `make setup` will be used after cloning or downloading to fulfill
# dependencies, and set up the project in an initial state.
# This is where you might download rubygems, node_modules, packages,
# compile code, build container images, initialize a database,
# anything else that needs to happen before your server is started
# for the first time
setup:
# `make server` will be used after `make setup` in order to start
# an http server process that listens on any unreserved port
# of your choice (e.g. 8080).
server:
# `make test` will be used after `make setup` in order to run
# your test suite.
test:
Any help, understanding or clarification is appreciated.

I think this is about the scripts you have in your package.json file. They probably want each make target to run the corresponding npm script, so that, for example, typing make test in the terminal starts your tests the same way npm test does.
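For instance, assuming a Create React App-style project where the npm scripts already exist (a sketch, not the one expected answer; adjust the commands to your project layout), the provided targets could be filled in like this. Note that Make recipe lines must be indented with a real tab character:

.PHONY: $(MAKECMDGOALS)

# Install everything the app needs after a fresh clone.
setup:
	npm install

# Start the app on an unreserved port; Create React App respects PORT.
server:
	PORT=8080 npm start

# CI=true makes Create React App's test runner exit instead of watching files.
test:
	CI=true npm test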

Related

Appengine Flexible Logging with Pyramid

I'm migrating from App Engine Standard to Flexible and trying to get logs to show up the same way they did under Standard. Example (screenshot of a single grouped request entry not reproduced here):
That is one single log entry, expanded. It shows the URL at the top, along with the HTTP method, status, and latency of the request. When expanded like this, it shows all the logs that were created during the request, their level, and where in the code they were emitted. This makes it very easy to see everything that happened during a request.
Under Flexible, none of this seems to happen. Calling logging.info() creates its own distinct entry in Logging, with no information about which request/route triggered it.
Each log entry (and, in the case of a fatal error, each line of the traceback) gets its own individual entry. After some digging through the API and documentation, I got to the point where I can at least group them together somewhat, but it's still not where it used to be.
I don't get a severity level at the "group" level of the log, only when it is expanded (which means filtering by severity isn't possible), nor do I get the line at which the logging call was made. This also means many more individual log entries, and I don't even know how this will affect log exports.
To group the logs, I'm passing Pyramid a custom logging handler, which is just Google's AppEngineHandler with get_gae_labels overridden to provide it with the correct trace ID header (out of the box, it only supports Django, Flask, and webapp2):
def get_gae_labels(self):
    """Return the labels for GAE app.

    If the trace ID can be detected, it will be included as a label.
    Currently, no other labels are included.

    :rtype: dict
    :returns: Labels for GAE app.
    """
    gae_labels = {}

    request = pyramid.threadlocal.get_current_request()
    header = request.headers.get('X-Cloud-Trace-Context')
    if header:
        gae_labels[_TRACE_ID_LABEL] = header.split("/", 1)[0]

    return gae_labels
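For context, here is a sketch of how that override can be wired up at application startup, assuming google-cloud-logging 1.x (where AppEngineHandler and _TRACE_ID_LABEL live in google.cloud.logging.handlers.app_engine):

import logging

import pyramid.threadlocal
from google.cloud.logging import Client
from google.cloud.logging.handlers.app_engine import (
    AppEngineHandler,
    _TRACE_ID_LABEL,
)

class PyramidAppEngineHandler(AppEngineHandler):
    """AppEngineHandler that reads the trace header from Pyramid's
    threadlocal request instead of Django/Flask/webapp2."""

    def get_gae_labels(self):
        gae_labels = {}
        request = pyramid.threadlocal.get_current_request()
        header = request.headers.get('X-Cloud-Trace-Context')
        if header:
            gae_labels[_TRACE_ID_LABEL] = header.split("/", 1)[0]
        return gae_labels

# Attach to the root logger so logging.info() etc. carry the trace label.
logging.getLogger().addHandler(PyramidAppEngineHandler(Client()))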
From what I can gather, App Engine Flexible runs nginx in front of my application; nginx passes stderr logs to Logging, along with its own nginx_request logs. Then, when my application calls logging.info(), Logging matches up trace IDs to group the entries together. Because of this, a few things seem to be happening:
A. It doesn't show the highest severity level of the related log entries.
B. When you expand the log entry, the related entries don't appear instantly like they do under App Engine Standard; they take a second to load in, presumably because Logging is looking up related logs via the trace ID. Under Standard, App Engine provides Logging with a line entry that carries metadata such as the log message, line number, and source code location, so it doesn't need to go looking for related entries; it's all there from the beginning.
I'm not sure of a solution here (hence the post), and I wonder whether this would ultimately be solved by expanding Google's Logging API. It seems to me that the real solution is to stop nginx from logging anything and let Pyramid handle logging exclusively, while also letting me send the line data up inline so that Logging doesn't have to group requests by trace ID.
I'm using a custom runtime under Flexible, with this in the YAML file:
runtime_config:
  python_version: 3.7
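(For completeness: a custom runtime's app.yaml also has to declare the runtime and environment, so the surrounding file looks roughly like this.)

runtime: custom
env: flex
runtime_config:
  python_version: 3.7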
And the Dockerfile:
FROM gcr.io/google-appengine/python
# Create a virtualenv for dependencies. This isolates these packages from
# system-level packages.
# Use -p python3 or -p python3.7 to select python version. Default is version 2.
RUN virtualenv /env -p python3
# Setting these environment variables is the same as running
# source /env/bin/activate.
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
# Copy the application's requirements.txt and run pip to install all
# dependencies into the virtualenv.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Add the application source code.
ADD . /app
# Run a WSGI server to serve the application. gunicorn must be declared as
# a dependency in requirements.txt.
RUN pip install -e .
CMD gunicorn -b :$PORT main:app
And requirements.txt:
pyramid
gunicorn
redis
google-cloud-tasks
googleapis-common-protos
google-cloud-ndb
google-cloud-logging

R Missing Package error when uploading to shinyapps.io

My Shiny program works fine locally on my PC (Windows 8, RStudio 0.99.489) but not when I upload it to shinyapps.io. I've tried two ways of packaging the data for upload: saveRDS on each object, and save.image on the entire environment. Either way, when I upload it to shinyapps.io I get:
Preparing to deploy application...DONE
Uploading bundle for application: 77966...DONE
Deploying bundle: 350891 for application: 77966 ...
Waiting for task: 132618597
building: Parsing manifest
building: Building image: 344796
building: Installing packages
################################ Begin Task Log ################################
[2016-01-16T22:19:45.818533554+0000] Installing R package: magrittr (1.5)
Error in library(stylo) : there is no package called 'stylo'
Execution halted
################################# End Task Log #################################
Error: Unhandled Exception: Child Task 132618599 failed: Error building image: Build exited with non-zero status: 1
Execution halted
It seems that shinyapps.io isn't aware of the stylo package. I tried installing it in my code, but that didn't help.
1. Does Shiny have all R packages?
2. If not, is there a list of which packages are available?
Thanks very much.
Are you including the stylo package at the top of your server.R file via library("stylo")? If you are doing that and it's giving you the error, try using require("stylo") instead.
From the docs, the rsconnect package is supposed to automatically detect which packages are necessary for your app. It's probably worth a read (if you haven't already), just to be sure you're doing everything correctly so that rsconnect can do its job.
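For example, a minimal server.R sketch (rsconnect scans top-level library()/require() calls to build the deployment manifest):

# server.R -- declare every package at the top so rsconnect can detect it
library(shiny)
library(stylo)

shinyServer(function(input, output) {
  # ... application logic ...
})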
I solved the problem by doing a clean build of my environment: I imported data tables and functions from RDS files only, and carefully avoided references to unnecessary packages. I recreated the one function I needed from stylo locally, so I could be certain I wouldn't require the package.

Supplying build info as qx.core.Environment entries

I have my qooxdoo project built and deployed by a CI server. Upon build, the server generates build info (version, VCS revision, CI build number, timestamp) that I would like to be passed to my qooxdoo app as qx.core.Environment keys.
At the moment, I have CI server generate a build.json file which is packaged together with the application, loaded at startup and converted to environment keys (by application code). This costs us an extra XHR.
On the other hand, I know that environment entries can be supplied at build time via config.json. Of course our build system could preprocess config.json to fill in the environment entries, but I'm a bit skeptical about the idea of the CI server fiddling with config.json. Is there a better solution? Is it possible to make the generator script read environment entries from some auxiliary source?
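(For reference, the config.json route looks roughly like this: the generator's "environment" job key maps straight to qx.core.Environment entries. The key names below are made up.)

"jobs" : {
  "build" : {
    "environment" : {
      "myapp.version" : "0.3.0",
      "myapp.revision" : "deadbeef",
      "myapp.buildNumber" : 42
    }
  }
}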
I would write a #VERSION# tag into my script and, at the end of the build process, just search and replace this string in the compiled JS file:
perl -i -p -e 's/#VERSION#/0.3.0/g' build/script/hello.js
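In a CI job, the version string would come from the environment rather than being hardcoded, along these lines (BUILD_VERSION is a hypothetical variable name; note the double quotes so the shell expands it):

perl -i -p -e "s/#VERSION#/$BUILD_VERSION/g" build/script/hello.js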

Compile App Engine application in Travis

Is there any way to run the compiler on an App Engine application written in Go and get an exit code, rather than having it go on to serve the application with the development server?
Because I want to add a check in my automated tests in Travis that the application actually compiles.
To clarify: I have access to the App Engine SDK / development server in Travis, but I don't want to run goapp serve, since it never exits.
Without actually implementing tests, your solution looks pretty hacky. Why not use goapp build? Here's my .travis.yml:
language: go
go:
- 1.2.1
# Grab newest version and suck down
install:
- export FILE=go_appengine_sdk_linux_amd64-$(curl https://appengine.google.com/api/updatecheck | grep release | grep -o '[0-9\.]*').zip
- curl -O https://storage.googleapis.com/appengine-sdks/featured/$FILE
- unzip -q $FILE
# Run build and tests
script:
- ./go_appengine/goapp test ./tests; # If you are testing
- ./go_appengine/goapp build ./packagedir; # Wherever you keep your stuff
For reference on tests, or just to see a project that builds.
Edit:
It has been a while, but I noticed recently that some of my builds randomly break. It is infuriating, and I have occasionally hardcoded SDK values to get around it. No more. Here's a very hacky way of grabbing the first featured (and thus actually hosted, since /updatecheck doesn't always return a hosted version) build of the SDK:
export FILE=$(curl https://storage.googleapis.com/appengine-sdks/ | grep -o 'featured/go_appengine_sdk_linux_amd64-[^\<]*' | head -1)
For just the file name:
export FILE=$(curl https://storage.googleapis.com/appengine-sdks/ | grep -oP '(?<=featured/)go_appengine_sdk_linux_amd64-[^\<]*' | head -1)
I solved this by adding an empty unit test at the entry point of the application (main_test.go). This unit test forces the whole application to compile.
Then I execute all the unit tests by putting goapp test ./... in the script section.
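The test file itself can be nearly empty; a sketch (the file and test names are arbitrary):

// main_test.go
package main

import "testing"

// An empty test is enough: `goapp test` compiles the whole package and
// everything it imports, so any compile error fails the Travis build.
func TestCompiles(t *testing.T) {}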

Sencha Cmd broken: Why does 'app build' ignore any command line parameters?

I've upgraded Sencha Cmd to v4, and I used to be able to build to a specific archive path and destination path. This is crucial, as the build server removes the source code folder and the archive path for each build. I had the output paths on the IIS server, away from the build server, so that they were never lost.
However, my build process is failing now, because the path parameters supplied to Sencha Cmd don't do anything.
If I type:
Sencha help app build
I get the following help:
Syntax
sencha app build [options] [environment] \
[destination] \
[archive]
But supplying these parameters has no effect on the location of the output.
Can anyone point me to the documentation that shows whether this has changed and how to rectify it? I can't find anything on their site showing how to build for production and have the output go to separate paths. I'd also like to know why the Sencha tools change so much; this wreaks havoc on existing build systems, because things suddenly stop working.
See below:
C:\Development\Projects\IEApp>sencha app build --archive C:\temp\build\IEApp\buildarchive --destination C:\temp\build\IEApp\Destination --environment production
Everything builds ok, but the C:\temp\build folder is empty.
I cannot tell you where the doc is, but to get Cmd to build to a different directory, this is what I do:
Modify the file .sencha\app\production.properties as follows:
# =============================================================================
# This file provides an override point for default variables defined in
# production.defaults.properties. These properties are only imported when building
# for the "production" environment.
#
# Properties defined in this file take priority over build.properties but are
# only loaded for "production" builds.
# =============================================================================
build.dir=${app.dir}/../../ExtJSApps/dashboard
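Alternatively, if you'd rather not edit the properties file, the same property can be set per invocation using Sencha Cmd's config command with command chaining (syntax as I recall it; verify against your Cmd version):

sencha config -prop build.dir=C:\temp\build\IEApp then app build production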
