I would like to deploy a React application on a web server on an ESP32 microcontroller, to control an API on that same microcontroller.
The web server is working and can send files and receive requests. The only real problem is that the file names of React apps are too long (e.g. ./build/static/js/988.78dc5abd.chunk.js), while the file system on an ESP32 is limited to file names no longer than 31 characters.
I tried reducing the file names by editing webpack.config.js, but that doesn't appear to work anymore. I also tried bundling everything into a single file, which I could not figure out either. Increasing the file name limit also seems impossible.
Does anyone have an idea of how I could deploy a React app on a file system that is limited to 31-character file names?
EDIT: The best way turned out to be creating a custom React webpack configuration and making a tarball of the result.
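For reference, a minimal sketch of that packaging step, assuming the webpack configuration already emits short file names into rapp/build (the paths and tarball name here are placeholders, not my actual setup):

# build the app, then pack the output into a single tarball so only
# one short file name has to live on the ESP32 file system
npm run build --prefix rapp
tar -czf app.tar.gz -C rapp/build .

The tarball keeps the whole build behind a single short file name, which sidesteps the per-file name limit.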
I created a pretty terrible solution to this problem, so if you came across this post, make sure you have exhausted all other options before you attempt to copy it:
Basically, I created a script that takes all the files recursively from the React app build directory (rapp/build) and copies them to the data folder, each named with a number plus the correct extension (so the browser picks up the file type):
#!/bin/bash
cd rapp/build
i=0
# pretty output
RED='\033[0;31m'
NC='\033[0m' # no color
# clear index and data folder
rm -rf ../../data/*
> ../../data/index
# grab all files and assign each a number
for f in $(find . -type f -printf '%P\n')
do
  # grab extension
  filename="${f##*/}"
  extension="${filename##*.}"
  # copy file with number
  cp "$f" "../../data/$i.$extension"
  # add original to index
  echo "$f" >> ../../data/index
  # add copy to index
  echo "$i.$extension" >> ../../data/index
  echo -e "$i.$extension ${RED}mapped to${NC} $f"
  i=$((i + 1))
done
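A hypothetical run from the project root might look like this (the script name and the upload step are assumptions about your toolchain):

npm run build --prefix rapp
./rename.sh   # the script above, saved at the project root
ls data       # 0.html 1.css 2.js ... plus the index mapping file
# then upload the data/ folder to SPIFFS with whatever tool you normally use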
Then I created a web server that automatically redirects all requests to the numbered copies:
#include "WiFi.h"
#include "SPIFFS.h"
#include "ESPAsyncWebServer.h"
#include <string>
const char* ssid = "abcdef";
const char* password = "";
AsyncWebServer server(80);
void mapRedirect(){
File file = SPIFFS.open("/index");
if (!file) {
Serial.println("Failed to open file for reading");
return;
}
Serial.println("Contents of file:");
int i=0;
while (file.available()) {
String orig=file.readStringUntil('\n');
String cop=file.readStringUntil('\n');
Serial.print(cop);
Serial.print("\tmapped to\t");
Serial.println(orig);
server.on(String("/"+orig).c_str(), HTTP_GET, [cop](AsyncWebServerRequest *request){
request->redirect("/"+String(cop));
}
);
i++;
}
file.close();
}
void setup(){
Serial.begin(115200);
if(!SPIFFS.begin(true)){
Serial.println("An Error has occurred while mounting SPIFFS");
return;
}
WiFi.softAP(ssid,password);
server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
request->redirect("/index.html");
});
server.serveStatic("/",SPIFFS,"/");
//redirect react files to coressponding mappings (spiffs character file name limit)
mapRedirect();
server.onNotFound([](AsyncWebServerRequest *request){
request->send(404, "text/plain", "The content you are looking for was not found.");
});
server.begin();
}
void loop(){}
Related
Is there any way to list files from Hadoop HDFS and store only the file names locally?
For example: I have a file india_20210517_20210523.csv. I'm currently copying files from HDFS to local using the copyToLocal command, but that is time-consuming because the files are huge. All I need is the file names stored in a .txt file, so I can perform cut operations on them with a bash script. Kindly help me.
The easiest way is to use the command below.
hdfs dfs -ls /path/fileNames | awk '{print $8}' | xargs -n 1 basename > Output.txt
How it works:
hdfs dfs -ls : This will list all the information about the path
awk '{print $8}' : To print the 8th column of the output
xargs -n 1 basename : To get the file names alone excluding the path
> Output.txt : To store the file names to a text file
Hope this answers your question.
If you want to do this programmatically, you can use FileSystem and FileStatus objects from Hadoop to:
list the contents of your (current or another) target directory,
check if each of the records of this directory is either a file or another directory, and
write the name of each file as a new line to a file stored locally.
The code for this type of application can look like this:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;

import java.io.File;
import java.io.PrintWriter;

public class Dir_ls
{
    public static void main(String[] args) throws Exception
    {
        // get input directory as a command-line argument
        Path inputDir = new Path(args[0]);

        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        if(fs.exists(inputDir))
        {
            // list directory's contents
            FileStatus[] fileList = fs.listStatus(inputDir);

            // create file and its writer
            PrintWriter pw = new PrintWriter(new File("output.txt"));

            // scan each record of the contents of the input directory
            for(FileStatus file : fileList)
            {
                if(!file.isDirectory()) // only take into account files
                {
                    System.out.println(file.getPath().getName());
                    pw.write(file.getPath().getName() + "\n");
                }
            }

            pw.close();
        }
        else
            System.out.println("Directory named \"" + args[0] + "\" doesn't exist.");
    }
}
So if we list the files from the root (.) directory of HDFS, whose contents include both directories and text files, the application prints only the names of the files to the command line and writes those same names into the output.txt file stored locally.
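A hypothetical session, with all directory contents and outputs invented purely for illustration (the jar name is a placeholder too), could look like:

hdfs dfs -ls /
#  drwxr-xr-x   - user group  0 2021-05-17 10:00 /dir1
#  -rw-r--r--   1 user group 42 2021-05-17 10:01 /india_20210517_20210523.csv
#  -rw-r--r--   1 user group 10 2021-05-17 10:02 /notes.txt
hadoop jar dir_ls.jar Dir_ls /
#  india_20210517_20210523.csv
#  notes.txt
cat output.txt
#  india_20210517_20210523.csv
#  notes.txt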
My Jenkins pipeline runs on the Slave using agent { node { label 'slave_node1' } }.
I use a Jenkins file parameter named uploaded_file and upload a file called hello.pdf.
My pipeline contains the following code:
stage('Precheck')
{
    steps {
        sh "echo ${WORKSPACE}"
        sh "echo ${uploaded_file}"
        sh "ls -ltr ${WORKSPACE}/*"
    }
}
Output:
/web/jenkins/workspace/MYCOPY
hello.pdf
ls: cannot access /web/jenkins/workspace/MYCOPY/*: No such file or directory
As you can see, no files were found in the slave's WORKSPACE.
Can you help me understand whether I'm checking for the uploaded file in the correct location, i.e. under the WORKSPACE directory?
How can I get the file uploaded to the slave's WORKSPACE?
I'm on jenkins version 2.249.1
Can I get this to work, at least on the latest version of Jenkins?
So do you have a fixed file that is copied in every build, i.e. is it the same file every time?
In that case you can save it as a secret file in Jenkins and do the following:
environment {
    FILE = credentials('my_file')
}
stages {
    stage('Preparation'){
        steps {
            // Copy your file to the workspace
            sh "cp ${FILE} ${WORKSPACE}"
            // Verify the file is copied.
            sh "ls -la"
        }
    }
}
I have a machine that is misbehaving (DNS, and thus ClearCase, isn't working at the moment). I was hoping to access the checked-out files I had in that view (and a few other view-private files) and restart my work on another machine while I wait for the IT admin guys to come back to work tomorrow.
Is it possible to get at my checked-out files from just the view storage directory (i.e. ~/views/peeterj_gcc6.vws/...)?
i.e. find in the view storage dir shows lots of paths that are surely my view-private files:
./.s/00019/8000149553ab76a5fontconfig.Turbo.bfc
./.s/00019/80003d3353ac5afftestinc_Subpool.compilecmd
./.s/00019/8000445a53ac65b3sqlnlscnvtbls6-LE.u
./.s/00019/8000045e53ab62eccdeSystemPageInterface.hpp
./.s/00019/8000556053ac934ftestinc_sqlhhid.C
but I'm not sure how to map from these to the original file names within the view.
EDIT:
I was able to brute force this task, where ~/tmp/f2 contained a list of the files of interest:
cd ~/views/peeterj_gcc6.vws/
for i in `cat ~/tmp/f2` ; do echo $i `find . -name "*$i"` ; done | grep ' ' | f.pl
where f.pl is the following Perl filter:
#!/usr/bin/perl
use strict;
use warnings;

my $vsdir = "$ENV{HOME}/views/peeterj_gcc6.vws";

while (<>)
{
   chomp;

   # first field is the bare file name, the rest are candidate storage paths
   my ($f, @rest) = split( / /, $_ );
   my @match = ();

   foreach my $p (@rest)
   {
      if ( $p =~ m,/[0-9a-f]+$f$, )
      {
         push( @match, $p );
         goto DONE; # hack. Just pick first.
      }
   }

   if ( scalar(@match) )
   {
DONE:
      print "cp $vsdir/@match $f\n";
   }
}
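For context, a hypothetical end-to-end run might look like this (the file names fed into ~/tmp/f2 are made up; since f.pl only prints cp commands, the output can be reviewed before anything is executed):

printf '%s\n' sqlhhid.C cdeSystemPageInterface.hpp > ~/tmp/f2
cd ~/views/peeterj_gcc6.vws/
for i in `cat ~/tmp/f2` ; do echo $i `find . -name "*$i"` ; done | grep ' ' | f.pl > restore.sh
# inspect restore.sh, then run the generated cp commands
sh restore.sh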
So, I'll re-pose the question: Is there a way to systematically map the names of the files in the view storage directory to the paths that they would be in in the view when clearcase is functional?
Is there a way to systematically map the names of the files in the view storage directory to the paths that they would be in in the view when ClearCase is functional?
Not really, at least not consistently, not even for their names.
If you look at the IBM technote "Locating view private files in the storage directory", their advice is:
Go into the .s sub-directory
Located under this directory are many numbered directories.
Browse through the numbered directories, searching for the view-private file.
All the files that are listed in these directories are view-private files. The file names of the files will be preceded by an ID number.
Example:
The view private file, help.txt, in the directory under .s, is named
241ae3df.000c.help.txt
Note: View private files that have been renamed in the view are not renamed in the view storage directory.
For instance, if you create a view private file called help.txt, and then you rename it to new.txt, the physical file in the view storage directory would still be named 241ae3df.000c.help.txt
So if you had another working view, you could try copying the files you find in the old view storage to a similar path in the new view storage, and see if that works.
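If you do try that, a rough sketch of the copy step (not a supported procedure; both view storage paths are placeholders) might be:

# mirror every view-private file from the broken view's storage into
# the same relative path under a working view's storage
OLD=~/views/peeterj_gcc6.vws
NEW=~/views/peeterj_new.vws
cd "$OLD/.s"
find . -type f | while read -r f; do
  mkdir -p "$NEW/.s/$(dirname "$f")"
  cp "$f" "$NEW/.s/$f"
done

Whether the new view actually picks the files up is something you would have to verify with cleartool ls once ClearCase is reachable again.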
I am using Node.js.
I want to check whether a folder is empty or not. One option is to use fs.readdir, but it loads the whole bunch of file names into an array.
I have more than 10,000 files in the folder, and loading all their names just to check whether the folder is empty is wasteful, so I am looking for an alternative solution.
How about using Node's native fs module (http://nodejs.org/api/fs.html#fs_fs_readdir_path_callback)? Its readdir and readdirSync functions provide you with an array of all the included file names (excluding . and ..). If the length is 0, your directory is empty.
This is an ugly hack but I'll throw it out there anyway. You could just call fs.rmdir on the directory. If the callback returns an error which contains code: 'ENOTEMPTY', it was not empty. If it succeeds then you can call fs.mkdir and replace it. This solution probably only makes sense if your script was the one which created the directory in the first place, has the proper permissions, etc.
You can execute any *nix shell command from within Node.js by using exec(). So for this you can use the good old 'ls -A ${folder} | wc -l' command, which lists all files/directories contained within ${folder}, hides the entries for the current directory (.) and parent directory (..) that you want to exclude from the count, and counts the rest.
For example, if ./tmp contains no files/directories, the code below will print 'Directory ./tmp is empty.'. Otherwise, it will print the number of files/directories it contains.
var exec = require('child_process').exec;

var dir = './tmp';
exec( 'ls -A ' + dir + ' | wc -l', function (error, stdout, stderr) {
    if( !error ){
        var numberOfFilesAsString = stdout.trim();
        if( numberOfFilesAsString === '0' ){
            console.log( 'Directory ' + dir + ' is empty.' );
        }
        else {
            console.log( 'Directory ' + dir + ' contains ' + numberOfFilesAsString + ' files/directories.' );
        }
    }
    else {
        throw error;
    }
});
Duplicate from my answer in how to determine whether the directory is empty directory with nodejs
There is also the possibility of using the opendir method, which creates an iterator for the directory.
This removes the need to read all the files and avoids the potential memory and time overhead.
import { promises as fsp } from "fs";

const dirIter = await fsp.opendir(_folderPath);
const { value, done } = await dirIter[Symbol.asyncIterator]().next();
if (!done) {
    // an entry was read, so the handle is still open and must be released;
    // when done is true the iterator has already closed the directory
    await dirIter.close();
}
The done value tells you whether the directory is empty.
What about globbing? I.e., checking whether myDir/* matches anything. It is not supported out of the box by Node (at the time of writing, v0.10.15), but a bunch of modules will do that for you, like minimatch.
I'd just like to add that there's a Node module, extfs, which can be used to check if a directory is empty using the function isEmpty(), as shown by the code snippet below:
var fs = require('extfs');
fs.isEmpty('/home/myFolder', function (empty) {
console.log(empty);
});
Check out the link for documentation regarding the synchronous version of this function.
While using appcfg.py request_logs, it shows "copy the download logs to [the output file path]". Where is the location Google App Engine uses to store the temp file?
EDIT1
While using appcfg.py request_logs, I noticed that the program first downloads logs to a temporary place and then copies them to the output file the user specified. I am looking for where the data is stored before it is copied to the target log file.
I'm not sure I've understood your question, but if you run a command like this from your app engine project directory (where your app.yaml file is):
appcfg.py request_logs . logs.out
then the output will end up in the file logs.out in that same directory (your project directory).
I found out that appcfg.py first stores the logs using Python's standard tempfile module, and then copies them to the desired output file location:
def DownloadLogs(self):
  """Download the requested logs.

  This will write the logs to the file designated by
  self.output_file, or to stdout if the filename is '-'.

  Multiple roundtrips to the server may be made.
  """
  StatusUpdate('Downloading request logs for %s %s.' %
               (self.config.application, self.version_id))
  tf = tempfile.TemporaryFile()
  last_offset = None
  try:
    while True:
      try:
        new_offset = self.RequestLogLines(tf, last_offset)
        if not new_offset or new_offset == last_offset:
          break
        last_offset = new_offset
      except KeyboardInterrupt:
        StatusUpdate('Keyboard interrupt; saving data downloaded so far.')
        break
    StatusUpdate('Copying request logs to %r.' % self.output_file)
    if self.output_file == '-':
      of = sys.stdout
    else:
      try:
        of = open(self.output_file, self.write_mode)
      except IOError, err:
        StatusUpdate('Can\'t write %r: %s.' % (self.output_file, err))
        sys.exit(1)
    try:
      line_count = CopyReversedLines(tf, of)
    finally:
      of.flush()
      if of is not sys.stdout:
        of.close()
  finally:
    tf.close()
  StatusUpdate('Copied %d records.' % line_count)