I have a little problem with cropping images in React Native.
As you can see in the example below, I want to crop the image inside the white rectangle, but I don't know whether my scaling formula is wrong or not:
takePicture = async () => {
  console.log("pic");
  if (this.camera != null) {
    const data = await this.camera.takePictureAsync();

    // Calculation: scale factors from the on-screen preview size
    // (width/height, presumably the screen dimensions) to the
    // full-resolution photo returned by the camera.
    const x_axis_scale = data.width / width;
    const y_axis_scale = data.height / height;

    // The white rectangle is drawn at (70, 120) with size 200 x 70 in screen points.
    var x_coord_int = 70 * x_axis_scale;
    var y_coord_int = 120 * y_axis_scale;
    var rect_width_int = 200 * x_axis_scale;
    var rect_height_int = 70 * y_axis_scale;

    const res = await ImageEditor.cropImage(data.uri, {
      offset: { x: x_coord_int, y: y_coord_int },
      size: {
        width: rect_width_int,
        height: rect_height_int
      }
    });

    this.setState({
      imageCrop: res
    });
  }
};
It doesn't crop correctly. Any help?
Sorry, I can't help with this exact issue, but there is a library, react-native-image-crop-picker, which can help with cropping. Hope this helps.
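For reference, a minimal sketch of what that could look like (an assumption, not tested against the original setup; the 200×70 target size is just taken from the question's rectangle):

import ImagePicker from 'react-native-image-crop-picker';

// Sketch: let the library capture and crop in one step instead of
// computing the crop rectangle by hand with ImageEditor.cropImage.
const takeAndCrop = async () => {
  const image = await ImagePicker.openCamera({
    cropping: true, // opens the library's crop UI after the shot
    width: 200,     // target crop width (from the question's rectangle)
    height: 70,     // target crop height (from the question's rectangle)
  });
  console.log(image.path); // path of the cropped image
};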
I'm trying to create an array of the meshes I added to my THREE.JS scene. Because I used a for-loop with the load() function to create the mesh instances, I want to be able to reference them later in the animate() function. The array shows a list of values in the console, but its length is logged as 0. This is the code I have:
const manager = new THREE.LoadingManager(); // loading manager
const loader = new GLTFLoader(manager);
let pos_arr = [];

manager.onLoad = function (e) {
  scene.traverse(function (model) {
    if (model.isMesh) {
      pos_arr.push(model);
    }
  });
};

let pos = new THREE.Vector3();

// spheres and equators
let ball = new THREE.Mesh();
let equator = new THREE.Mesh();

for (let i = 1; i <= 12; i++) {
  let scale = Math.floor(Math.random() * 3) + 1;
  let pos_x = Math.floor(Math.random() * 10) * 5 - 25;
  let pos_y = Math.floor(Math.random() * 10) * 5 - 25;
  let pos_z = Math.floor(Math.random() * 10) * 5 - 25;
  pos = new THREE.Vector3(pos_x, pos_y, pos_z);

  loader.load('./ball.gltf', function (gltf) { // load sphere
    gltf.scene.traverse(function (model) {
      if (model.isMesh) {
        model.castShadow = true;
        model.material = sphereMaterial;
      }
    });
    ball = gltf.scene;
    ball.position.set(pos_x, pos_y, pos_z);
    ball.scale.set(scale, scale, scale);
    scene.add(ball);
  }, undefined, function (error) {
    console.error(error);
  });

  loader.load('./equator.gltf', function (gltf2) {
    gltf2.scene.traverse(function (model) { // for gltf shadows!
      if (model.isMesh) {
        model.castShadow = true;
        model.material = equatorMaterial;
      }
    });
    equator = gltf2.scene;
    equator.position.set(pos_x, pos_y, pos_z);
    equator.scale.set(scale, scale, scale);
    scene.add(equator);
  }, undefined, function (error) {
    console.error(error);
  });
}

// light
const light = new THREE.AmbientLight(0xffffff);
scene.add(light);

console.log(pos_arr);
console.log(pos_arr.length);
This is what the console shows:
I got a recommendation to use the onLoad callback of the LoadingManager, which I did, but it didn't work. If anyone happens to know a possible solution, it would be greatly appreciated :)
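For what it's worth, here is a small illustration (not from the original post) of the timing at play: the two console.log calls at the bottom of the snippet run synchronously, before any GLTF file has finished loading, while logging inside the manager's onLoad callback only happens once every load registered with that manager is done:

manager.onLoad = function () {
  scene.traverse(function (model) {
    if (model.isMesh) {
      pos_arr.push(model);
    }
  });
  // Every load registered with this manager has finished here,
  // so pos_arr is populated at this point.
  console.log(pos_arr, pos_arr.length);
};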
Framework: React.js.
Library: "raphael": "^2.2.8"
Description: the canvas throws an Error: attribute d: Expected number, "….68028259277344CNaN,NaN,NaN,NaN,…".
console errors screenshot
http://jsfiddle.net/fzjc81ym/
this.canvas = Raphael('grid', '100%', '100%');
drawLine(this.canvas, path1, duration, arrowAtrr, color, strokeDasharray, strokeWidth, arrowend).then(() => this.resolve(item, callback))
const drawLine = (canvas, pathStr, duration = 1000, attr = arrowAtrr, color = Color.GREEN, strokeDasharray = '-', strokeWidth = 4, arrowend = "block-wide-long") => {
  return new Promise((resolve) => {
    attr.stroke = color;
    attr['stroke-dasharray'] = strokeDasharray;
    attr['stroke-width'] = strokeWidth;
    attr['arrow-end'] = arrowend;

    var guidePath = canvas.path(pathStr).attr({ stroke: 'none', fill: 'none' });
    var path = canvas.path(pathStr).attr({ stroke: 'none', fill: 'none' });
    var totalLength = guidePath.getTotalLength(guidePath);
    var startTime = new Date().getTime();
    var intervalLength = 25;

    var intervalId = setInterval(function () {
      var elapsedTime = new Date().getTime() - startTime;
      var thisLength = elapsedTime / duration * totalLength;
      var subPathStr = guidePath.getSubpath(0, thisLength);
      attr.path = subPathStr;
      path.attr(attr);
      path.animate(attr, intervalLength);
      if (elapsedTime >= duration) {
        clearInterval(intervalId);
        resolve();
      }
    }, intervalLength);
  });
}
It seems to happen when I use the arrow-end attribute. I didn't find an answer anywhere else.
Does anyone have an idea how to solve this error?
A solution:
In my case, the arrow line was shorter than the arrow-end triangle, so I set the minimum length of the sub-path to 11 (the length of the arrow-end).
var subPathStr = guidePath.getSubpath(0, Math.max(11, thisLength));
Here is the surrounding code:
var subPathStr = guidePath.getSubpath(0, Math.max(11, thisLength));
attr.path = subPathStr;
path.attr(attr)
path.animate(attr, intervalLength);
You didn't really include enough information about your problem, but this may help.
https://www.npmjs.com/package/react-raphael
It looks like you need to give the Paper an explicit width and height (e.g. width={300} height={300}) in order to set the size of the canvas.
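For what it's worth, a minimal sketch with the plain raphael API already used in the question (an assumption, not verified against the fiddle) would create the paper with numeric pixel dimensions instead of '100%':

// Sketch: explicit pixel dimensions instead of '100%' / '100%'.
this.canvas = Raphael('grid', 800, 600);

// The rest of the question's code can then draw on it unchanged:
drawLine(this.canvas, path1, duration, arrowAtrr, color, strokeDasharray, strokeWidth, arrowend)
  .then(() => this.resolve(item, callback));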
I trained my model with Google Teachable Machine (Image) and included the model in my Ionic Angular app. I loaded the model successfully and use the camera preview to predict the class shown in the image from the camera.
The picture displayed in the canvas changes properly, but the predict() method returns the same result for every call.
import * as tmImage from '@teachablemachine/image';
...
async startPrediction() {
  this.model = await tmImage.load(this.modelURL, this.metadataURL);
  this.maxPredictions = this.model.getTotalClasses();
  console.log('classes: ' + this.maxPredictions); // works properly
  requestAnimationFrame(() => {
    this.loop();
  });
}

async loop() {
  const imageAsBase64 = await this.cameraPreview.takeSnapshot({ quality: 60 });
  const canvas = document.getElementById('output') as HTMLImageElement;
  // image changes properly, I checked it with a canvas output
  canvas.src = 'data:image/jpeg;base64,' + imageAsBase64;

  const prediction = await this.model.predict(canvas);
  for (let i = 0; i < this.maxPredictions; i++) {
    const classPrediction =
      prediction[i].className + ': ' + prediction[i].probability.toFixed(2);
    // probability doesn't change, even if I hold the camera close over a trained image
  }

  requestAnimationFrame(() => {
    this.loop();
  });
}
The prediction result is, e.g., class1 = 0.34, class2 = 0.66, but it doesn't change.
I hope you can help me find my bug. Thanks in advance!
The image has probably not been loaded yet when you call the prediction model. It has been discussed here and there.
// `canvas` is the HTMLImageElement with id 'output' from the question.
function load(url) {
  return new Promise((resolve, reject) => {
    canvas.src = url;
    // Resolve only once the browser has finished decoding the new image.
    canvas.onload = () => {
      resolve(canvas);
    };
  });
}
await load(base64Data)
// then the image can be used for prediction
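Applied to the loop() from the question, a minimal sketch (an assumption based on the snippet above, not a verified fix) would wait for the snapshot to finish decoding before predicting:

async loop() {
  const imageAsBase64 = await this.cameraPreview.takeSnapshot({ quality: 60 });
  const img = document.getElementById('output');

  // Wait until the browser has decoded the new frame before predicting.
  await new Promise((resolve) => {
    img.onload = () => resolve(img);
    img.src = 'data:image/jpeg;base64,' + imageAsBase64;
  });

  const prediction = await this.model.predict(img);
  for (let i = 0; i < this.maxPredictions; i++) {
    console.log(prediction[i].className + ': ' + prediction[i].probability.toFixed(2));
  }

  requestAnimationFrame(() => this.loop());
}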
I am working on a multi-page PDF download using html2canvas and pdfmake.
I am able to download the PDF, but the page height and width of the generated PDF are not right and the resolution is low/blurry. The code is below; please refer to the attached screenshot.
Thanks in advance.
The code I have used is:
html2canvas(document.getElementById('newId')).then(
  canvas => {
    var image = canvas.toDataURL('image/png');
    const PAGE_WIDTH = 500;
    const PAGE_HEIGHT = 700;
    const content = [];
    var w = 500;
    var h = 700;

    function getPngDimensions(base64) {
      const header = atob(base64.slice(22, 70)).slice(16, 24);
      const uint8 = Uint8Array.from(header, c => c.charCodeAt(0));
      const dataView = new DataView(uint8.buffer);
      return {
        width: dataView.getInt32(0),
        height: dataView.getInt32(4)
      };
    }

    const splitImage = (img, content, callback) => () => {
      const canvas = document.createElement('canvas');
      canvas.width = w * 2;
      canvas.height = h * 2;
      canvas.style.width = w + 'px';
      canvas.style.height = h + 'px';
      const ctx = canvas.getContext('2d');
      ctx.scale(2, 2);
      const printHeight = img.height * PAGE_WIDTH / img.width;
      for (let pages = 0; printHeight > pages * PAGE_HEIGHT; pages++) {
        canvas.height = Math.min(PAGE_HEIGHT, printHeight - pages * PAGE_HEIGHT);
        ctx.drawImage(img, 0, -pages * PAGE_HEIGHT, img.width, printHeight);
        content.push({ image: canvas.toDataURL(), margin: [0, 5], width: PAGE_WIDTH });
      }
      callback();
    };

    function next() {
      pdfMake.createPdf({ content }).open();
    }

    const { width, height } = getPngDimensions(image);
    const printHeight = height * PAGE_WIDTH / width;
    if (printHeight > PAGE_HEIGHT) {
      const img = new Image();
      img.onload = splitImage(img, content, next);
      img.src = image;
      return;
    }
    content.push({ image, margin: [0, 5], width: PAGE_WIDTH });
    next();
  }
);
Update:
I tried updating the width and height of the image formed by the canvas, but increasing the width only increases the pixel size and further truncates the right end of the dashboard.
const PAGE_WIDTH = 500;
const PAGE_HEIGHT = 700;
//some more code here as mentioned in the detailed snippet
const content = [];
var w=500;
var h=700;
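One thing that may help with the blur (an assumption, not part of the original post): html2canvas v1+ accepts a scale option, so the source canvas can be rendered at a higher resolution while pdfmake still places the image at PAGE_WIDTH:

// Sketch: render the element at 2x resolution; pdfmake still lays the image
// out at width 500, so the extra pixels only raise the effective DPI.
html2canvas(document.getElementById('newId'), { scale: 2 }).then(canvas => {
  const image = canvas.toDataURL('image/png');
  pdfMake.createPdf({ content: [{ image, width: 500, margin: [0, 5] }] }).open();
});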
Related question: Poor Pixel Display For Multi Page PDF Through Banana Dashboard
Could you help me with how I can get the output (the source of the cropped image) from the react-image-crop module?
The upload component looks like this:
class MyUpload extends Component {
  constructor() {
    super();
    this.state = {
      src: 'source-to-image',
      crop: {
        x: 10,
        y: 10,
        aspect: 9 / 16,
        width: 100
      }
    };
  }

  onCropComplete = (crop, pixelCrop) => {
    this.setState({
      crop
    });
  };

  render() {
    return (
      <ReactCrop
        src={this.state.src}
        onComplete={this.onCropComplete}
      />
    );
  }
}
The onCropComplete method returns only the coordinates, width, and height of the crop, not the image source. I would like to get a Blob file.
EDIT (working solution, thanks to Mosè Raguzzini's reply):
If someone has a similar problem, call the getCroppedImg function from the tutorial in your component and create a URL from the returned Blob object, like this:
getCroppedImg(this.state.image, pixelCrop, 'preview.jpg')
  .then((res) => {
    const blobUrl = URL.createObjectURL(res);
    console.log(blobUrl); // it returns the cropped image as a URL of the form "blob:http://something..."
  });
react-image-crop is not meant to produce blobs; it is only meant to crop images inline. You probably need something like https://foliotek.github.io/Croppie/
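For illustration, a minimal sketch of getting a Blob out of Croppie (based on its documented bind/result API; the element id and viewport size here are made up):

import Croppie from 'croppie';

// Hypothetical container element and viewport size, for illustration only.
const croppie = new Croppie(document.getElementById('crop-target'), {
  viewport: { width: 160, height: 90 },
});

croppie.bind({ url: 'source-to-image' })
  .then(() => croppie.result({ type: 'blob' })) // resolves with a Blob of the cropped area
  .then((blob) => {
    console.log(URL.createObjectURL(blob)); // "blob:http://..." URL, like in the question's edit
  });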
UPDATE:
Check the section "What about showing the crop on the client?" at the bottom of https://www.npmjs.com/package/react-image-crop; the blob is available as a hidden feature:
/**
 * @param {File} image - Image File Object
 * @param {Object} pixelCrop - pixelCrop Object provided by react-image-crop
 * @param {String} fileName - Name of the returned file in Promise
 */
function getCroppedImg(image, pixelCrop, fileName) {
  const canvas = document.createElement('canvas');
  canvas.width = pixelCrop.width;
  canvas.height = pixelCrop.height;
  const ctx = canvas.getContext('2d');

  ctx.drawImage(
    image,
    pixelCrop.x,
    pixelCrop.y,
    pixelCrop.width,
    pixelCrop.height,
    0,
    0,
    pixelCrop.width,
    pixelCrop.height
  );

  // As Base64 string
  // const base64Image = canvas.toDataURL('image/jpeg');

  // As a blob
  return new Promise((resolve, reject) => {
    canvas.toBlob(file => {
      file.name = fileName;
      resolve(file);
    }, 'image/jpeg');
  });
}

async test() {
  const croppedImg = await getCroppedImg(image, pixelCrop, returnedFileName);
}