Java Code Examples for org.nd4j.linalg.api.ndarray.INDArray#shape()

The following examples show how to use org.nd4j.linalg.api.ndarray.INDArray#shape(). Each example is drawn from an open-source project; the source file, project, and license are noted above it.
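As a quick refresher before the examples (a minimal sketch, assuming the usual Nd4j imports): shape() returns one entry per dimension of the array. Recent ND4J versions return long[], while older versions returned int[], which some of the examples below reflect.

INDArray arr = Nd4j.create(2, 3, 4);            // rank-3 array
long[] shape = arr.shape();                     // [2, 3, 4]
System.out.println(Arrays.toString(shape));     // prints [2, 3, 4]
System.out.println(arr.rank() == shape.length); // true: rank equals shape length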
Example 1
Source File: ContentCostFunction.java    From Java-Machine-Learning-for-Computer-Vision with MIT License
/**
 * Equation (2) from the Gatys et al. paper: https://arxiv.org/pdf/1508.06576.pdf
 * This is the derivative of the content loss w.r.t. the combo image features
 * within a specific layer of the CNN.
 *
 * @param contentActivations Features at particular layer from the original content image
 * @param generatedActivations    Features at same layer from current combo image
 * @return Derivatives of content loss w.r.t. combo features
 */
public INDArray contentFunctionDerivative(INDArray contentActivations, INDArray generatedActivations) {

    generatedActivations = generatedActivations.dup();
    contentActivations = contentActivations.dup();

    double channels = generatedActivations.shape()[0];
    double w = generatedActivations.shape()[1];
    double h = generatedActivations.shape()[2];

    double contentWeight = 1.0 / (2 * (channels) * (w) * (h));
    // Compute the F^l - P^l portion of equation (2), where F^l = comboFeatures and P^l = originalFeatures
    INDArray diff = generatedActivations.sub(contentActivations);
    // This multiplication ensures the result is 0 where F^l < 0, but remains F^l - P^l otherwise
    return flatten(diff.muli(contentWeight).muli(ensurePositive(generatedActivations)));
}
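A hedged usage sketch (the costFunction instance, its constructor, and the [channels, width, height] feature layout implied by the indexing above are assumptions, not from the original project):

ContentCostFunction costFunction = new ContentCostFunction();
INDArray contentFeatures = Nd4j.rand(new int[]{64, 28, 28}); // activations from the content image
INDArray comboFeatures = Nd4j.rand(new int[]{64, 28, 28});   // activations from the combo image
INDArray gradient = costFunction.contentFunctionDerivative(contentFeatures, comboFeatures);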
 
Example 2
Source File: OpExecutionerUtil.java    From nd4j with Apache License 2.0
/** Can we do the op (X = Op(X)) directly on the arrays without breaking X up into 1d tensors first?
 * In general, this is possible if the elements of X are contiguous in the buffer, OR if every element
 * of X is at position offset+i*elementWiseStride in the buffer
 * */
public static boolean canDoOpDirectly(INDArray x) {
    if (x.elementWiseStride() < 1)
        return false;
    if (x.isVector())
        return true;

    //For a single NDArray all we require is that the elements are contiguous in the buffer, or spaced at a fixed element-wise stride

    //Full buffer -> implies all elements are contiguous (and match)
    long l1 = x.lengthLong();
    long dl1 = x.data().length();
    if (l1 == dl1)
        return true;

    //Strides are same as a zero offset NDArray -> all elements are contiguous (even if not offset 0)
    long[] shape1 = x.shape();
    long[] stridesAsInit =
                    (x.ordering() == 'c' ? ArrayUtil.calcStrides(shape1) : ArrayUtil.calcStridesFortran(shape1));
    boolean stridesSameAsInit = Arrays.equals(x.stride(), stridesAsInit);
    return stridesSameAsInit;
}
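A hedged sketch of what the check distinguishes (the arrays are illustrative, assuming the usual Nd4j imports; exact results can depend on ordering and version):

INDArray full = Nd4j.linspace(1, 20, 20).reshape(4, 5); // contiguous: length == buffer length
INDArray view = full.get(NDArrayIndex.all(), NDArrayIndex.interval(0, 2)); // strided sub-view
boolean direct = OpExecutionerUtil.canDoOpDirectly(full); // true: full buffer is contiguous
boolean tensors = OpExecutionerUtil.canDoOpDirectly(view); // typically false: view is not contiguous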
 
Example 3
Source File: MultiTimestepRegressionExample.java    From dl4j-tutorials with MIT License
/**
 * Used to create the different time series for plotting purposes
 */
private static XYSeriesCollection createSeries(XYSeriesCollection seriesCollection, INDArray data, int offset, String name) {
    long nRows = data.shape()[2];
    boolean predicted = name.startsWith("Predicted");
    long repeat = predicted ? data.shape()[1] : data.shape()[0];

    for (int j = 0; j < repeat; j++) {
        XYSeries series = new XYSeries(name + j);
        for (int i = 0; i < nRows; i++) {
            if (predicted)
                series.add(i + offset, data.slice(0).slice(j).getDouble(i));
            else
                series.add(i + offset, data.slice(j).getDouble(i));
        }
        seriesCollection.addSeries(series);
    }

    return seriesCollection;
}
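A call sketch from within the same class (the arrays and their [miniBatch, series, timeSteps] layout are assumptions):

XYSeriesCollection collection = new XYSeriesCollection();
createSeries(collection, trainArray, 0, "Train");
createSeries(collection, predictionArray, (int) trainArray.shape()[2], "Predicted");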
 
Example 4
Source File: OpExecutionerUtil.java    From nd4j with Apache License 2.0
/** Can we do the transform op (Z = Op(X,Y)) directly on the arrays without breaking them up into 1d tensors first? */
public static boolean canDoOpDirectly(INDArray x, INDArray y, INDArray z) {
    if (x.isVector())
        return true;
    if (x.ordering() != y.ordering() || x.ordering() != z.ordering())
        return false; //other than vectors, elements in f vs. c NDArrays will never line up
    if (x.elementWiseStride() < 1 || y.elementWiseStride() < 1)
        return false;
    //Full buffer + matching strides -> implies all elements are contiguous (and match)
    long l1 = x.lengthLong();
    long dl1 = x.data().length();
    long l2 = y.lengthLong();
    long dl2 = y.data().length();
    long l3 = z.lengthLong();
    long dl3 = z.data().length();
    long[] strides1 = x.stride();
    long[] strides2 = y.stride();
    long[] strides3 = z.stride();
    boolean equalStrides = Arrays.equals(strides1, strides2) && Arrays.equals(strides1, strides3);
    if (l1 == dl1 && l2 == dl2 && l3 == dl3 && equalStrides)
        return true;

    //Strides match + are same as a zero offset NDArray -> all elements are contiguous (and match)
    if (equalStrides) {
        long[] shape1 = x.shape();
        long[] stridesAsInit = (x.ordering() == 'c' ? ArrayUtil.calcStrides(shape1)
                        : ArrayUtil.calcStridesFortran(shape1));
        boolean stridesSameAsInit = Arrays.equals(strides1, stridesAsInit);
        return stridesSameAsInit;
    }

    return false;
}
 
Example 5
Source File: Adam.java    From deeplearning4j with Apache License 2.0
@Override
public GradientUpdater instantiate(INDArray viewArray, boolean initializeViewArray) {
    AdamUpdater u = new AdamUpdater(this);
    long[] gradientShape = viewArray.shape();
    gradientShape = Arrays.copyOf(gradientShape, gradientShape.length);
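    // The view packs Adam's two state arrays (first and second moment) side-by-side,
    // so halving the second dimension recovers the shape of a single state array
    // matching the gradient (based on how DL4J lays out updater state).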
    gradientShape[1] /= 2;
    u.setStateViewArray(viewArray, gradientShape, viewArray.ordering(), initializeViewArray);
    return u;
}
 
Example 6
Source File: OpExecutionerUtil.java    From nd4j with Apache License 2.0
/** Can we do the transform op (X = Op(X,Y)) directly on the arrays without breaking them up into 1d tensors first? */
public static boolean canDoOpDirectly(INDArray x, INDArray y) {
    if (x.isVector())
        return true;
    if (x.ordering() != y.ordering())
        return false; //other than vectors, elements in f vs. c NDArrays will never line up
    if (x.elementWiseStride() < 1 || y.elementWiseStride() < 1)
        return false;
    //Full buffer + matching strides -> implies all elements are contiguous (and match)
    //Need strides to match, otherwise elements in buffer won't line up (i.e., c vs. f order arrays)
    long l1 = x.lengthLong();
    long dl1 = x.data().length();
    long l2 = y.lengthLong();
    long dl2 = y.data().length();
    long[] strides1 = x.stride();
    long[] strides2 = y.stride();
    boolean equalStrides = Arrays.equals(strides1, strides2);
    if (l1 == dl1 && l2 == dl2 && equalStrides)
        return true;

    //Strides match + are same as a zero offset NDArray -> all elements are contiguous (and match)
    if (equalStrides) {
        long[] shape1 = x.shape();
        long[] stridesAsInit = (x.ordering() == 'c' ? ArrayUtil.calcStrides(shape1)
                        : ArrayUtil.calcStridesFortran(shape1));
        boolean stridesSameAsInit = Arrays.equals(strides1, stridesAsInit);
        return stridesSameAsInit;
    }

    return false;
}
 
Example 7
Source File: ShufflesTests.java    From nd4j with Apache License 2.0
public float[] measureState(INDArray data) {
    // for 3D arrays we save the 0th element of each slice.
    float[] result = new float[data.shape()[0]];

    for (int x = 0; x < data.shape()[0]; x++) {
        result[x] = data.slice(x).getFloat(0);
    }

    return result;
}
 
Example 8
Source File: ImageUtils.java    From dl4j-tutorials with MIT License
public static BufferedImage toBufferedImage(INDArray data) {
    long[] shape = data.shape();
    int width = (int) shape[3];
    int height = (int) shape[2];
    int channels = (int) shape[1];

    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);

    int[] dataPixels = data.permute(0, 3, 2, 1).getRow(0).data().asInt();

    int[] pixels = new int[width * height];

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int[] argb = new int[4];
            argb[0] = 0xff;                        // alpha
            argb[1] = dataPixels[y * width + x];   // first channel
            if (channels > 1) {
                argb[2] = dataPixels[y * width + x + pixels.length];     // second channel
            }
            if (channels > 2) {
                argb[3] = dataPixels[y * width + x + pixels.length * 2]; // third channel
            }
            pixels[y * width + x] = argb[0] << 24 | argb[1] << 16 | argb[2] << 8 | argb[3];
        }
    }
    setRGB(image, 0, 0, width, height, pixels);
    return image;
}
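A usage sketch (the [batch, channels, height, width] input layout matches the indexing above; the file name and pixel range are hypothetical):

INDArray batch = Nd4j.rand(new int[]{1, 3, 32, 32}).muli(255); // pixel values in 0..255
BufferedImage preview = ImageUtils.toBufferedImage(batch);
ImageIO.write(preview, "png", new File("preview.png"));        // throws IOException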
 
Example 9
Source File: EvaluationUtils.java    From deeplearning4j with Apache License 2.0
public static INDArray reshapeTimeSeriesTo2d(INDArray labels) {
    val labelsShape = labels.shape();
    INDArray labels2d;
    if (labelsShape[0] == 1) {
        labels2d = labels.tensorAlongDimension(0, 1, 2).permutei(1, 0); //Edge case: miniBatchSize==1
    } else if (labelsShape[2] == 1) {
        labels2d = labels.tensorAlongDimension(0, 1, 0); //Edge case: timeSeriesLength=1
    } else {
        labels2d = labels.permute(0, 2, 1);
        labels2d = labels2d.reshape('f', labelsShape[0] * labelsShape[2], labelsShape[1]);
    }
    return labels2d;
}
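A shape-level sketch of the general branch (sizes are hypothetical):

INDArray labels = Nd4j.rand(new int[]{8, 5, 10});  // [miniBatch, nOut, timeSteps]
INDArray labels2d = EvaluationUtils.reshapeTimeSeriesTo2d(labels);
// labels2d has shape [8 * 10, 5]: one row per (example, time step) pair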
 
Example 10
Source File: LayerOpValidation.java    From deeplearning4j with Apache License 2.0
@Test
public void testConv2dBasic() {
    int nIn = 3;
    int nOut = 4;
    int kH = 2;
    int kW = 2;

    int mb = 3;
    int imgH = 28;
    int imgW = 28;

    SameDiff sd = SameDiff.create();
    INDArray wArr = Nd4j.create(kH, kW, nIn, nOut);
    INDArray bArr = Nd4j.create(1, nOut);
    INDArray inArr = Nd4j.create(mb, nIn, imgH, imgW);

    SDVariable in = sd.var("in", inArr);
    SDVariable w = sd.var("W", wArr);
    SDVariable b = sd.var("b", bArr);

    //Order: https://github.com/deeplearning4j/libnd4j/blob/6c41ea5528bb1f454e92a9da971de87b93ff521f/include/ops/declarable/generic/convo/conv2d.cpp#L20-L22
    //in, w, b - bias is optional

    Conv2DConfig c = Conv2DConfig.builder()
            .kH(kH).kW(kW)
            .pH(0).pW(0)
            .sH(1).sW(1)
            .dH(1).dW(1)
            .isSameMode(false)
            .build();

    SDVariable out = sd.cnn().conv2d("conv", in, w, b, c);
    out = sd.nn().tanh("out", out);

    INDArray outArr = out.eval();
    //Expected output size: out = (in - k + 2*p)/s + 1 = (28-2+0)/1+1 = 27
    val outShape = outArr.shape();
    assertArrayEquals(new long[]{mb, nOut, 27, 27}, outShape);
    // sd.execBackwards(); // TODO: test failing here
}
 
Example 11
Source File: AMSGrad.java    From deeplearning4j with Apache License 2.0
@Override
public GradientUpdater instantiate(INDArray viewArray, boolean initializeViewArray) {
    AMSGradUpdater u = new AMSGradUpdater(this);
    long[] gradientShape = viewArray.shape();
    gradientShape = Arrays.copyOf(gradientShape, gradientShape.length);
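    // AMSGrad tracks three state arrays (m, v, and vHat) packed in the view,
    // hence the division by 3 (based on how DL4J lays out updater state).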
    gradientShape[1] /= 3;
    u.setStateViewArray(viewArray, gradientShape, viewArray.ordering(), initializeViewArray);
    return u;
}
 
Example 12
Source File: CoverageModelWPreconditionerSpark.java    From gatk-protected with BSD 3-Clause "New" or "Revised" License
@Override
    public INDArray operate(@Nonnull final INDArray W_tl)
            throws DimensionMismatchException {
        if (W_tl.rank() != 2 || W_tl.shape()[0] != numTargets || W_tl.shape()[1] != numLatents) {
            throw new DimensionMismatchException(W_tl.length(), numTargets * numLatents);
        }
        long startTimeRFFT = System.nanoTime();
        /* forward rfft */
        final INDArray W_kl = Nd4j.create(fftSize, numLatents);
        IntStream.range(0, numLatents).parallel().forEach(li ->
                W_kl.get(NDArrayIndex.all(), NDArrayIndex.point(li)).assign(
                        Nd4j.create(F_tt.getForwardFFT(W_tl.get(NDArrayIndex.all(), NDArrayIndex.point(li))),
                                new int[]{fftSize, 1})));
        long endTimeRFFT = System.nanoTime();

        /* apply the preconditioner in the Fourier space */
        long startTimePrecond = System.nanoTime();
        final Map<LinearlySpacedIndexBlock, INDArray> W_kl_map = CoverageModelSparkUtils.partitionINDArrayToMap(fourierSpaceBlocks, W_kl);
        final Broadcast<Map<LinearlySpacedIndexBlock, INDArray>> W_kl_bc = ctx.broadcast(W_kl_map);
        final JavaPairRDD<LinearlySpacedIndexBlock, INDArray> preconditionedWRDD = linOpPairRDD
                .mapToPair(p -> {
                    final INDArray W_kl_chunk = W_kl_bc.value().get(p._1);
                    final INDArray linOp_chunk = p._2;
                    final int blockSize = linOp_chunk.shape()[0];
                    final List<INDArray> linOpWList = IntStream.range(0, blockSize).parallel()
                            .mapToObj(k -> CoverageModelEMWorkspaceMathUtils.linsolve(linOp_chunk.get(NDArrayIndex.point(k)),
                                    W_kl_chunk.get(NDArrayIndex.point(k))))
                            .collect(Collectors.toList());
                    return new Tuple2<>(p._1, Nd4j.vstack(linOpWList));
                });
        W_kl.assign(CoverageModelSparkUtils.assembleINDArrayBlocksFromRDD(preconditionedWRDD, 0));
        W_kl_bc.destroy();
//        final JavaPairRDD<LinearlySpacedIndexBlock, INDArray> W_kl_RDD = CoverageModelSparkUtils.rddFromINDArray(W_kl,
//                fourierSpaceBlocks, ctx, true);
//        W_kl.assign(CoverageModelSparkUtils.assembleINDArrayBlocks(linOpPairRDD.join((W_kl_RDD))
//                .mapValues(p -> {
//                    final INDArray linOp = p._1;
//                    final INDArray W = p._2;
//                    final int blockSize = linOp.shape()[0];
//                    final List<INDArray> linOpWList = IntStream.range(0, blockSize).parallel().mapToObj(k ->
//                            CoverageModelEMWorkspaceMathUtils.linsolve(linOp.get(NDArrayIndex.point(k)),
//                                    W.get(NDArrayIndex.point(k))))
//                            .collect(Collectors.toList());
//                    return Nd4j.vstack(linOpWList);
//                }), false));
//        W_kl_RDD.unpersist();
        long endTimePrecond = System.nanoTime();

        /* irfft */
        long startTimeIRFFT = System.nanoTime();
        final INDArray res = Nd4j.create(numTargets, numLatents);
        IntStream.range(0, numLatents).parallel().forEach(li ->
                res.get(NDArrayIndex.all(), NDArrayIndex.point(li)).assign(
                        F_tt.getInverseFFT(W_kl.get(NDArrayIndex.all(), NDArrayIndex.point(li)))));
        long endTimeIRFFT = System.nanoTime();

        logger.debug("Local FFT timing: " + (endTimeRFFT - startTimeRFFT + endTimeIRFFT - startTimeIRFFT)/1000000 + " ms");
        logger.debug("Spark preconditioner application timing: " + (endTimePrecond - startTimePrecond)/1000000 + " ms");

        return res;
    }
 
Example 13
Source File: LinAlgExceptions.java    From nd4j with Apache License 2.0
public static void assertMatrix(INDArray arr) {
    if (arr.shape().length > 2)
        throw new IllegalArgumentException("Array must be a matrix. Array has shape: " + Arrays.toString(arr.shape()));
}
 
Example 14
Source File: SameDiffTests.java    From nd4j with Apache License 2.0
@Test
public void testConv3dBasic() {
    int nIn = 3;
    int nOut = 4;
    int kH = 2;
    int kW = 2;
    int kT = 2;

    int mb = 3;
    int imgH = 28;
    int imgW = 28;
    int imgT = 28;

    SameDiff sd = SameDiff.create();
    INDArray wArr = Nd4j.create(nOut, nIn, kT, kH, kW); //As per DL4J
    INDArray bArr = Nd4j.create(1, nOut);
    INDArray inArr = Nd4j.create(mb, nIn, imgT, imgH, imgW);

    SDVariable in = sd.var("in", inArr);
    SDVariable w = sd.var("W", wArr);
    SDVariable b = sd.var("b", bArr);

    //Order: https://github.com/deeplearning4j/libnd4j/blob/6c41ea5528bb1f454e92a9da971de87b93ff521f/include/ops/declarable/generic/convo/conv2d.cpp#L20-L22
    //in, w, b - bias is optional
    SDVariable[] vars = new SDVariable[]{in, w, b};

    Conv3DConfig conv3DConfig = Conv3DConfig.builder()
            .kH(kH).kW(kW).kT(kT)
            .dilationH(1).dilationW(1).dilationT(1)
            .isValidMode(false)
            .biasUsed(false)
            .build();

    SDVariable out = sd.conv3d(vars, conv3DConfig);
    out = sd.tanh("out", out);

    INDArray outArr = sd.execAndEndResult();
    //Expected output size: out = (in - k)/s + 1 = (28-2)/1 + 1 = 27
    val outShape = outArr.shape();
    assertArrayEquals(new long[]{mb, nOut, 27, 27, 27}, outShape);
}
 
Example 15
Source File: BasicTADManager.java    From nd4j with Apache License 2.0
@Override
public Pair<DataBuffer, DataBuffer> getTADOnlyShapeInfo(INDArray array, int[] dimension) {
    if (dimension != null && dimension.length > 1)
        Arrays.sort(dimension);

    if (dimension == null)
        dimension = new int[] {Integer.MAX_VALUE};

    boolean isScalar = dimension == null || (dimension.length == 1 && dimension[0] == Integer.MAX_VALUE);

    // FIXME: this is fast triage, remove it later
    int targetRank = isScalar ? 2 : array.rank(); //dimensionLength <= 1 ? 2 : dimensionLength;
    long offsetLength = 0;
    long tadLength = 1;

    if(!isScalar)
        for (int i = 0; i < dimension.length; i++) {
            tadLength *= array.shape()[dimension[i]];
        }

    if(!isScalar)
        offsetLength = array.lengthLong() / tadLength;
    else
        offsetLength = 1;
    //     logger.info("Original shape info before TAD: {}", array.shapeInfoDataBuffer());
    //    logger.info("dimension: {}, tadLength: {}, offsetLength for TAD: {}", Arrays.toString(dimension),tadLength, offsetLength);

    DataBuffer outputBuffer = new CudaLongDataBuffer(targetRank * 2 + 4);
    DataBuffer offsetsBuffer = new CudaLongDataBuffer(offsetLength);

    AtomicAllocator.getInstance().getAllocationPoint(outputBuffer).tickHostWrite();
    AtomicAllocator.getInstance().getAllocationPoint(offsetsBuffer).tickHostWrite();

    DataBuffer dimensionBuffer = AtomicAllocator.getInstance().getConstantBuffer(dimension);
    Pointer dimensionPointer = AtomicAllocator.getInstance().getHostPointer(dimensionBuffer);

    Pointer xShapeInfo = AddressRetriever.retrieveHostPointer(array.shapeInfoDataBuffer());
    Pointer targetPointer = AddressRetriever.retrieveHostPointer(outputBuffer);
    Pointer offsetsPointer = AddressRetriever.retrieveHostPointer(offsetsBuffer);
    if(!isScalar)
        nativeOps.tadOnlyShapeInfo((LongPointer) xShapeInfo, (IntPointer) dimensionPointer, dimension.length,
                (LongPointer) targetPointer, new LongPointerWrapper(offsetsPointer));

    else  {
        outputBuffer.put(0,2);
        outputBuffer.put(1,1);
        outputBuffer.put(2,1);
        outputBuffer.put(3,1);
        outputBuffer.put(4,1);
        outputBuffer.put(5,0);
        outputBuffer.put(6,0);
        outputBuffer.put(7,99);

    }

    AtomicAllocator.getInstance().getAllocationPoint(outputBuffer).tickHostWrite();
    AtomicAllocator.getInstance().getAllocationPoint(offsetsBuffer).tickHostWrite();

    //   logger.info("TAD shapeInfo after construction: {}", Arrays.toString(TadDescriptor.dataBufferToArray(outputBuffer)));
    // now we need to copy this buffer to either device global memory or device cache

    return new Pair<>(outputBuffer, offsetsBuffer);

}
 
Example 16
Source File: ConvolutionalIterationListener.java    From deeplearning4j with Apache License 2.0
/**
 * This method renders one convolution layer as a set of patches plus multiple zoomed images.
 * @param tensor3D activations of the layer to render
 * @param maxHeight maximum height of the output image
 * @param zoomWidth width of each zoomed patch
 * @param zoomHeight height of each zoomed patch
 * @return the rendered layer image
 */
private BufferedImage renderMultipleImagesLandscape(INDArray tensor3D, int maxHeight, int zoomWidth,
                int zoomHeight) {
    /*
        first we need to determine the width of the output image.
     */
    int border = 1;
    int padding_row = 2;
    int padding_col = 2;
    int zoomPadding = 20;

    val tShape = tensor3D.shape();

    val numColumns = tShape[0] / tShape[1];

    val width = (numColumns * (tShape[1] + border + padding_col)) + padding_col + zoomPadding + zoomWidth;

    BufferedImage outputImage = new BufferedImage((int) width, maxHeight, BufferedImage.TYPE_BYTE_GRAY);
    Graphics2D graphics2D = outputImage.createGraphics();

    graphics2D.setPaint(bgColor);
    graphics2D.fillRect(0, 0, outputImage.getWidth(), outputImage.getHeight());

    int columnOffset = 0;
    int rowOffset = 0;
    int numZoomed = 0;
    int limZoomed = 5;
    int zoomSpan = maxHeight / limZoomed;
    for (int z = 0; z < tensor3D.shape()[0]; z++) {

        INDArray tad2D = tensor3D.tensorAlongDimension(z, 2, 1);

        val rWidth = tad2D.shape()[0];
        val rHeight = tad2D.shape()[1];

        val loc_height = (rHeight) + (border * 2) + padding_row;
        val loc_width = (rWidth) + (border * 2) + padding_col;



        BufferedImage currentImage = renderImageGrayscale(tad2D);

        /*
            if the resulting image doesn't fit into the output image, step to the next column
         */
        if (rowOffset + loc_height > maxHeight) {
            columnOffset += loc_width;
            rowOffset = 0;
        }

        /*
            now we place this image into the output image
        */

        graphics2D.drawImage(currentImage, columnOffset + 1, rowOffset + 1, null);


        /*
            draw borders around each image
        */

        graphics2D.setPaint(borderColor);
        if (tad2D.shape()[0] > Integer.MAX_VALUE || tad2D.shape()[1] > Integer.MAX_VALUE)
            throw new ND4JArraySizeException();
        graphics2D.drawRect(columnOffset, rowOffset, (int) tad2D.shape()[0], (int) tad2D.shape()[1]);



        /*
            draw a zoomed copy of this image, if eligible (up to limZoomed zoomed samples)
        */

        if (z % 5 == 0 && // zoom each 5th element
                        z != 0 && // do not zoom 0 element
                        numZoomed < limZoomed && // we want only few zoomed samples
                        (rHeight != zoomHeight && rWidth != zoomWidth) // do not zoom if dimensions match
        ) {

            int cY = (zoomSpan * numZoomed) + (zoomHeight);

            graphics2D.drawImage(currentImage, (int) width - zoomWidth - 1, cY - 1, zoomWidth, zoomHeight, null);
            graphics2D.drawRect((int) width - zoomWidth - 2, cY - 2, zoomWidth, zoomHeight);

            // draw line to connect this zoomed pic with its original
            graphics2D.drawLine(columnOffset + (int) rWidth, rowOffset + (int) rHeight, (int) width - zoomWidth - 2,
                            cY - 2 + zoomHeight);
            numZoomed++;
        }

        rowOffset += loc_height;
    }
    return outputImage;
}
 
Example 17
Source File: ArrayDescriptor.java    From deeplearning4j with Apache License 2.0
public ArrayDescriptor(INDArray array) throws Exception{
    this(array.data().address(), array.shape(), array.stride(), array.data().dataType(), array.ordering());
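    // Note: the this(...) delegation must be the first statement in a Java constructor,
    // which is why the emptiness check can only run after the fields are already set.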
    if (array.isEmpty()){
        throw new UnsupportedOperationException("Empty arrays are not supported");
    }
}
 
Example 18
Source File: CpuTADManager.java    From nd4j with Apache License 2.0
@Override
public Pair<DataBuffer, DataBuffer> getTADOnlyShapeInfo(INDArray array, int[] dimension) {
    if (dimension != null && dimension.length > 1)
        Arrays.sort(dimension);

    if (dimension == null || dimension.length >= 1 && dimension[0] == Integer.MAX_VALUE) {
        return new Pair<>(array.shapeInfoDataBuffer(), null);
    } else {
        TadDescriptor descriptor = new TadDescriptor(array, dimension);

        if (!cache.containsKey(descriptor)) {
            int dimensionLength = dimension.length;

            // FIXME: this is fast triage, remove it later
            int targetRank = array.rank(); //dimensionLength <= 1 ? 2 : dimensionLength;
            long offsetLength;
            long tadLength = 1;
            for (int i = 0; i < dimensionLength; i++) {
                tadLength *= array.shape()[dimension[i]];
            }

            offsetLength = array.lengthLong() / tadLength;

            DataBuffer outputBuffer = new LongBuffer(targetRank * 2 + 4);
            DataBuffer offsetsBuffer = new LongBuffer(offsetLength);

            DataBuffer dimensionBuffer = constantHandler.getConstantBuffer(dimension);
            Pointer dimensionPointer = dimensionBuffer.addressPointer();

            Pointer xShapeInfo = array.shapeInfoDataBuffer().addressPointer();
            Pointer targetPointer = outputBuffer.addressPointer();
            Pointer offsetsPointer = offsetsBuffer.addressPointer();

            nativeOps.tadOnlyShapeInfo((LongPointer) xShapeInfo, (IntPointer) dimensionPointer, dimension.length,
                            (LongPointer) targetPointer, new LongPointerWrapper(offsetsPointer));


            // If the line below will be uncommented, shapes from JVM will be used on native side
            //outputBuffer = array.tensorAlongDimension(0, dimension).shapeInfoDataBuffer();
            Pair<DataBuffer, DataBuffer> pair = new Pair<>(outputBuffer, offsetsBuffer);
            if (counter.get() < MAX_ENTRIES) {
                counter.incrementAndGet();
                cache.put(descriptor, pair);

                bytes.addAndGet((outputBuffer.length() * 4) + (offsetsBuffer.length() * 8));
            }
            return pair;
        }

        return cache.get(descriptor);
    }
}
 
Example 19
Source File: Nd4jIOUtils.java    From gatk-protected with BSD 3-Clause "New" or "Revised" License
/**
 * Writes a tensor NDArray (rank >= 2) to a tab-separated file.
 *
 * A rank-D tensor is flattened along the last D - 1 dimensions (in 'c' order) and is written as a
 * matrix. The shape of the tensor is written as the first comment line for proper reshaping upon loading.
 *
 * The columns are named arbitrarily as "COL_0", "COL_1", ... "COL_L" where L = total tensor elements / first dimension.
 *
 * @param arr an arbitrary NDArray
 * @param outputFile output file
 * @param identifier an identifier string
 * @param rowNames list of row names
 */
public static void writeNDArrayTensorToTextFile(@Nonnull final INDArray arr,
                                                @Nonnull final File outputFile,
                                                @Nonnull final String identifier,
                                                @Nullable final List<String> rowNames) {
    Utils.nonNull(arr, "The NDArray to be written to file must be non-null");
    Utils.nonNull(outputFile, "The output file must be non-null");
    Utils.validateArg(arr.rank() >= 2, "The array must be at least rank-2");

    final int[] shape = arr.shape();
    final INDArray tensorToMatrix = arr.reshape('c', new int[] {shape[0], arr.length() / shape[0]});
    writeNDArrayMatrixToTextFile(tensorToMatrix, outputFile, identifier, rowNames, null, arr.shape());
}
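A call sketch (shape, identifier string, and file name are hypothetical): a [2, 3, 4] tensor is flattened in 'c' order into a 2 x 12 matrix, and the original shape is stored in the leading comment line for reshaping on load.

INDArray tensor = Nd4j.rand(new int[]{2, 3, 4});
writeNDArrayTensorToTextFile(tensor, new File("tensor.tsv"), "example-tensor", null);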
 
Example 20
Source File: Utils.java    From wekaDeeplearning4j with GNU General Public License v3.0
/**
 * Determines if the activations need reshaping
 * @param activationAtLayer Activations in question
 * @return true if the activations need reshaping (too high dimensionality)
 */
public static boolean needsReshaping(INDArray activationAtLayer) {
  return activationAtLayer.shape().length != 2;
}
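A sketch of the reshape such a check typically guards (the flattening strategy shown is an assumption, not code from the project):

INDArray act = Nd4j.rand(new int[]{32, 64, 7, 7});  // conv activations, rank 4
if (Utils.needsReshaping(act)) {
    act = act.reshape(act.size(0), act.length() / act.size(0)); // -> [32, 3136]
}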