Java Code Examples for com.google.android.exoplayer2.util.Util#scaleLargeTimestamp()

The following examples show how to use com.google.android.exoplayer2.util.Util#scaleLargeTimestamp(). You can vote up the examples you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example.
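Before the examples, here is a minimal, self-contained sketch of the call itself. The signature (timestamp, multiplier, divisor) and the list variant Util#scaleLargeTimestamps used below are the same ones that appear in the examples; the 90 kHz timescale and the concrete durations are made-up values for illustration only.

import com.google.android.exoplayer2.C;
import com.google.android.exoplayer2.util.Util;
import java.util.Arrays;
import java.util.List;

public final class ScaleLargeTimestampDemo {
  public static void main(String[] args) {
    // Hypothetical DASH-style values: a duration expressed in a 90 kHz timescale.
    long timescale = 90_000;            // timescale units per second
    long durationInTimescale = 180_000; // two seconds' worth of timescale units

    // Rescales as duration * 1_000_000 / timescale, taking care not to overflow
    // a long for large inputs (hence "large timestamp").
    long durationUs =
        Util.scaleLargeTimestamp(durationInTimescale, C.MICROS_PER_SECOND, timescale);
    System.out.println(durationUs); // prints 2000000

    // The plural variant rescales a whole list of timestamps in one call.
    List<Long> chunkStartTimes = Arrays.asList(0L, 90_000L, 180_000L);
    long[] chunkStartTimesUs =
        Util.scaleLargeTimestamps(chunkStartTimes, C.MICROS_PER_SECOND, timescale);
    System.out.println(Arrays.toString(chunkStartTimesUs)); // prints [0, 1000000, 2000000]
  }
}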
Example 1
Source File: DashManifestParser.java    From Telegram-FOSS with GNU General Public License v2.0 6 votes
/**
 * Parses a single Event node in the manifest.
 *
 * @param xpp The current xml parser.
 * @param schemeIdUri The schemeIdUri of the parent EventStream.
 * @param value The value of the parent EventStream.
 * @param timescale The timescale of the parent EventStream.
 * @param scratchOutputStream A {@link ByteArrayOutputStream} that is used when parsing event
 *     objects.
 * @return A pair containing the node's presentation timestamp in microseconds and the parsed
 *     {@link EventMessage}.
 * @throws XmlPullParserException If there is any error parsing this node.
 * @throws IOException If there is any error reading from the underlying input stream.
 */
protected Pair<Long, EventMessage> parseEvent(
    XmlPullParser xpp,
    String schemeIdUri,
    String value,
    long timescale,
    ByteArrayOutputStream scratchOutputStream)
    throws IOException, XmlPullParserException {
  long id = parseLong(xpp, "id", 0);
  long duration = parseLong(xpp, "duration", C.TIME_UNSET);
  long presentationTime = parseLong(xpp, "presentationTime", 0);
  long durationMs = Util.scaleLargeTimestamp(duration, C.MILLIS_PER_SECOND, timescale);
  long presentationTimesUs = Util.scaleLargeTimestamp(presentationTime, C.MICROS_PER_SECOND,
      timescale);
  String messageData = parseString(xpp, "messageData", null);
  byte[] eventObject = parseEventObject(xpp, scratchOutputStream);
  return Pair.create(
      presentationTimesUs,
      buildEvent(
          schemeIdUri,
          value,
          id,
          durationMs,
          messageData == null ? eventObject : Util.getUtf8Bytes(messageData)));
}
 
Example 2
Source File: SsManifest.java    From Telegram-FOSS with GNU General Public License v2.0 6 votes
public StreamElement(String baseUri, String chunkTemplate, int type, String subType,
    long timescale, String name, int maxWidth, int maxHeight, int displayWidth,
    int displayHeight, String language, Format[] formats, List<Long> chunkStartTimes,
    long lastChunkDuration) {
  this(
      baseUri,
      chunkTemplate,
      type,
      subType,
      timescale,
      name,
      maxWidth,
      maxHeight,
      displayWidth,
      displayHeight,
      language,
      formats,
      chunkStartTimes,
      Util.scaleLargeTimestamps(chunkStartTimes, C.MICROS_PER_SECOND, timescale),
      Util.scaleLargeTimestamp(lastChunkDuration, C.MICROS_PER_SECOND, timescale));
}
 
Example 3
Source File: DashManifestParser.java    From TelePlus-Android with GNU General Public License v2.0 6 votes
/**
 * Parses a single Event node in the manifest.
 *
 * @param xpp The current xml parser.
 * @param schemeIdUri The schemeIdUri of the parent EventStream.
 * @param value The value of the parent EventStream.
 * @param timescale The timescale of the parent EventStream.
 * @param scratchOutputStream A {@link ByteArrayOutputStream} that is used when parsing event
 *     objects.
 * @return The {@link EventMessage} parsed from this EventStream node.
 * @throws XmlPullParserException If there is any error parsing this node.
 * @throws IOException If there is any error reading from the underlying input stream.
 */
protected EventMessage parseEvent(
    XmlPullParser xpp,
    String schemeIdUri,
    String value,
    long timescale,
    ByteArrayOutputStream scratchOutputStream)
    throws IOException, XmlPullParserException {
  long id = parseLong(xpp, "id", 0);
  long duration = parseLong(xpp, "duration", C.TIME_UNSET);
  long presentationTime = parseLong(xpp, "presentationTime", 0);
  long durationMs = Util.scaleLargeTimestamp(duration, 1000, timescale);
  long presentationTimesUs = Util.scaleLargeTimestamp(presentationTime, C.MICROS_PER_SECOND,
      timescale);
  byte[] eventObject = parseEventObject(xpp, scratchOutputStream);
  return buildEvent(schemeIdUri, value, id, durationMs, eventObject, presentationTimesUs);
}
 
Example 4
Source File: SsManifest.java    From Telegram-FOSS with GNU General Public License v2.0 6 votes
/**
 * @param majorVersion The client manifest major version.
 * @param minorVersion The client manifest minor version.
 * @param timescale The timescale of the media as the number of units that pass in one second.
 * @param duration The overall presentation duration in units of the timescale attribute, or 0 if
 *     the duration is unknown.
 * @param dvrWindowLength The length of the trailing window in units of the timescale attribute,
 *     or 0 if this attribute is unspecified or not applicable.
 * @param lookAheadCount The number of fragments in a lookahead, or {@link #UNSET_LOOKAHEAD} if
 *     this attribute is unspecified or not applicable.
 * @param isLive True if the manifest describes a live presentation still in progress. False
 *     otherwise.
 * @param protectionElement Content protection information, or null if the content is not
 *     protected.
 * @param streamElements The contained stream elements.
 */
public SsManifest(
    int majorVersion,
    int minorVersion,
    long timescale,
    long duration,
    long dvrWindowLength,
    int lookAheadCount,
    boolean isLive,
    ProtectionElement protectionElement,
    StreamElement[] streamElements) {
  this(
      majorVersion,
      minorVersion,
      duration == 0
          ? C.TIME_UNSET
          : Util.scaleLargeTimestamp(duration, C.MICROS_PER_SECOND, timescale),
      dvrWindowLength == 0
          ? C.TIME_UNSET
          : Util.scaleLargeTimestamp(dvrWindowLength, C.MICROS_PER_SECOND, timescale),
      lookAheadCount,
      isLive,
      protectionElement,
      streamElements);
}
 
Example 5
Source File: DashManifestParser.java    From MediaSDK with Apache License 2.0 6 votes
/**
 * Parses a single Event node in the manifest.
 *
 * @param xpp The current xml parser.
 * @param schemeIdUri The schemeIdUri of the parent EventStream.
 * @param value The value of the parent EventStream.
 * @param timescale The timescale of the parent EventStream.
 * @param scratchOutputStream A {@link ByteArrayOutputStream} that is used when parsing event
 *     objects.
 * @return A pair containing the node's presentation timestamp in microseconds and the parsed
 *     {@link EventMessage}.
 * @throws XmlPullParserException If there is any error parsing this node.
 * @throws IOException If there is any error reading from the underlying input stream.
 */
protected Pair<Long, EventMessage> parseEvent(
    XmlPullParser xpp,
    String schemeIdUri,
    String value,
    long timescale,
    ByteArrayOutputStream scratchOutputStream)
    throws IOException, XmlPullParserException {
  long id = parseLong(xpp, "id", 0);
  long duration = parseLong(xpp, "duration", C.TIME_UNSET);
  long presentationTime = parseLong(xpp, "presentationTime", 0);
  long durationMs = Util.scaleLargeTimestamp(duration, C.MILLIS_PER_SECOND, timescale);
  long presentationTimesUs = Util.scaleLargeTimestamp(presentationTime, C.MICROS_PER_SECOND,
      timescale);
  String messageData = parseString(xpp, "messageData", null);
  byte[] eventObject = parseEventObject(xpp, scratchOutputStream);
  return Pair.create(
      presentationTimesUs,
      buildEvent(
          schemeIdUri,
          value,
          id,
          durationMs,
          messageData == null ? eventObject : Util.getUtf8Bytes(messageData)));
}
 
Example 6
Source File: SegmentBase.java    From MediaSDK with Apache License 2.0 5 votes
/** @see DashSegmentIndex#getTimeUs(long) */
public final long getSegmentTimeUs(long sequenceNumber) {
  long unscaledSegmentTime;
  if (segmentTimeline != null) {
    unscaledSegmentTime =
        segmentTimeline.get((int) (sequenceNumber - startNumber)).startTime
            - presentationTimeOffset;
  } else {
    unscaledSegmentTime = (sequenceNumber - startNumber) * duration;
  }
  return Util.scaleLargeTimestamp(unscaledSegmentTime, C.MICROS_PER_SECOND, timescale);
}
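For the fixed-duration branch above, the arithmetic is simply (sequenceNumber - startNumber) * duration, rescaled from the representation's timescale to microseconds. A rough sketch with hypothetical values (startNumber, duration and timescale are invented here; imports as in the sketch near the top of the page):

// Hypothetical values for illustration only.
long startNumber = 1;
long duration = 48_000;   // segment duration in timescale units
long timescale = 48_000;  // 48 kHz timescale, so each segment lasts exactly one second
long sequenceNumber = 5;

long unscaledSegmentTime = (sequenceNumber - startNumber) * duration; // 192_000
long segmentTimeUs =
    Util.scaleLargeTimestamp(unscaledSegmentTime, C.MICROS_PER_SECOND, timescale);
// segmentTimeUs == 4_000_000: the fifth segment starts four seconds into the period.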
 
Example 7
Source File: XingSeeker.java    From Telegram-FOSS with GNU General Public License v2.0 5 votes
/**
 * Returns a {@link XingSeeker} for seeking in the stream, if required information is present.
 * Returns {@code null} if not. On returning, {@code frame}'s position is not specified so the
 * caller should reset it.
 *
 * @param inputLength The length of the stream in bytes, or {@link C#LENGTH_UNSET} if unknown.
 * @param position The position of the start of this frame in the stream.
 * @param mpegAudioHeader The MPEG audio header associated with the frame.
 * @param frame The data in this audio frame, with its position set to immediately after the
 *     'Xing' or 'Info' tag.
 * @return A {@link XingSeeker} for seeking in the stream, or {@code null} if the required
 *     information is not present.
 */
public static @Nullable XingSeeker create(
    long inputLength, long position, MpegAudioHeader mpegAudioHeader, ParsableByteArray frame) {
  int samplesPerFrame = mpegAudioHeader.samplesPerFrame;
  int sampleRate = mpegAudioHeader.sampleRate;

  int flags = frame.readInt();
  int frameCount;
  if ((flags & 0x01) != 0x01 || (frameCount = frame.readUnsignedIntToInt()) == 0) {
    // If the frame count is missing/invalid, the header can't be used to determine the duration.
    return null;
  }
  long durationUs = Util.scaleLargeTimestamp(frameCount, samplesPerFrame * C.MICROS_PER_SECOND,
      sampleRate);
  if ((flags & 0x06) != 0x06) {
    // If the size in bytes or table of contents is missing, the stream is not seekable.
    return new XingSeeker(position, mpegAudioHeader.frameSize, durationUs);
  }

  long dataSize = frame.readUnsignedIntToInt();
  long[] tableOfContents = new long[100];
  for (int i = 0; i < 100; i++) {
    tableOfContents[i] = frame.readUnsignedByte();
  }

  // TODO: Handle encoder delay and padding in 3 bytes offset by xingBase + 213 bytes:
  // delay = (frame.readUnsignedByte() << 4) + (frame.readUnsignedByte() >> 4);
  // padding = ((frame.readUnsignedByte() & 0x0F) << 8) + frame.readUnsignedByte();

  if (inputLength != C.LENGTH_UNSET && inputLength != position + dataSize) {
    Log.w(TAG, "XING data size mismatch: " + inputLength + ", " + (position + dataSize));
  }
  return new XingSeeker(
      position, mpegAudioHeader.frameSize, durationUs, dataSize, tableOfContents);
}
 
Example 8
Source File: AtomParsers.java    From TelePlus-Android with GNU General Public License v2.0 5 votes
/**
 * Parses a trak atom (defined in 14496-12).
 *
 * @param trak Atom to decode.
 * @param mvhd Movie header atom, used to get the timescale.
 * @param duration The duration in units of the timescale declared in the mvhd atom, or
 *     {@link C#TIME_UNSET} if the duration should be parsed from the tkhd atom.
 * @param drmInitData {@link DrmInitData} to be included in the format.
 * @param ignoreEditLists Whether to ignore any edit lists in the trak box.
 * @param isQuickTime True for QuickTime media. False otherwise.
 * @return A {@link Track} instance, or {@code null} if the track's type isn't supported.
 */
public static Track parseTrak(Atom.ContainerAtom trak, Atom.LeafAtom mvhd, long duration,
    DrmInitData drmInitData, boolean ignoreEditLists, boolean isQuickTime)
    throws ParserException {
  Atom.ContainerAtom mdia = trak.getContainerAtomOfType(Atom.TYPE_mdia);
  int trackType = parseHdlr(mdia.getLeafAtomOfType(Atom.TYPE_hdlr).data);
  if (trackType == C.TRACK_TYPE_UNKNOWN) {
    return null;
  }

  TkhdData tkhdData = parseTkhd(trak.getLeafAtomOfType(Atom.TYPE_tkhd).data);
  if (duration == C.TIME_UNSET) {
    duration = tkhdData.duration;
  }
  long movieTimescale = parseMvhd(mvhd.data);
  long durationUs;
  if (duration == C.TIME_UNSET) {
    durationUs = C.TIME_UNSET;
  } else {
    durationUs = Util.scaleLargeTimestamp(duration, C.MICROS_PER_SECOND, movieTimescale);
  }
  Atom.ContainerAtom stbl = mdia.getContainerAtomOfType(Atom.TYPE_minf)
      .getContainerAtomOfType(Atom.TYPE_stbl);

  Pair<Long, String> mdhdData = parseMdhd(mdia.getLeafAtomOfType(Atom.TYPE_mdhd).data);
  StsdData stsdData = parseStsd(stbl.getLeafAtomOfType(Atom.TYPE_stsd).data, tkhdData.id,
      tkhdData.rotationDegrees, mdhdData.second, drmInitData, isQuickTime);
  long[] editListDurations = null;
  long[] editListMediaTimes = null;
  if (!ignoreEditLists) {
    Pair<long[], long[]> edtsData = parseEdts(trak.getContainerAtomOfType(Atom.TYPE_edts));
    editListDurations = edtsData.first;
    editListMediaTimes = edtsData.second;
  }
  return stsdData.format == null ? null
      : new Track(tkhdData.id, trackType, mdhdData.first, movieTimescale, durationUs,
          stsdData.format, stsdData.requiredSampleTransformation, stsdData.trackEncryptionBoxes,
          stsdData.nalUnitLengthFieldLength, editListDurations, editListMediaTimes);
}
 
Example 9
Source File: SonicAudioProcessor.java    From TelePlus-Android with GNU General Public License v2.0 5 votes
/**
 * Returns the specified duration scaled to take into account the speedup factor of this instance,
 * in the same units as {@code duration}.
 *
 * @param duration The duration to scale taking into account speedup.
 * @return The specified duration scaled to take into account speedup, in the same units as
 *     {@code duration}.
 */
public long scaleDurationForSpeedup(long duration) {
  if (outputBytes >= MIN_BYTES_FOR_SPEEDUP_CALCULATION) {
    return outputSampleRateHz == sampleRateHz
        ? Util.scaleLargeTimestamp(duration, inputBytes, outputBytes)
        : Util.scaleLargeTimestamp(duration, inputBytes * outputSampleRateHz,
            outputBytes * sampleRateHz);
  } else {
    return (long) ((double) speed * duration);
  }
}
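The effect of the first branch above is duration * inputBytes / outputBytes, with the byte counters acting as a measured speed ratio. A rough illustration, assuming hypothetical counters for a 2x speedup at an unchanged sample rate (imports as in the first sketch):

// Hypothetical counters: twice as many input bytes consumed as output bytes produced.
long inputBytes = 2_000_000;
long outputBytes = 1_000_000;

// Map a duration measured on the sped-up output back onto the source timeline.
long outputDurationUs = 5_000_000; // five seconds of 2x-speed audio
long scaledDurationUs = Util.scaleLargeTimestamp(outputDurationUs, inputBytes, outputBytes);
// scaledDurationUs == 10_000_000: five seconds of output correspond to ten seconds of input.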
 
Example 10
Source File: AtomParsers.java    From Telegram with GNU General Public License v2.0 5 votes
/**
 * Parses a trak atom (defined in 14496-12).
 *
 * @param trak Atom to decode.
 * @param mvhd Movie header atom, used to get the timescale.
 * @param duration The duration in units of the timescale declared in the mvhd atom, or
 *     {@link C#TIME_UNSET} if the duration should be parsed from the tkhd atom.
 * @param drmInitData {@link DrmInitData} to be included in the format.
 * @param ignoreEditLists Whether to ignore any edit lists in the trak box.
 * @param isQuickTime True for QuickTime media. False otherwise.
 * @return A {@link Track} instance, or {@code null} if the track's type isn't supported.
 */
public static Track parseTrak(Atom.ContainerAtom trak, Atom.LeafAtom mvhd, long duration,
    DrmInitData drmInitData, boolean ignoreEditLists, boolean isQuickTime)
    throws ParserException {
  Atom.ContainerAtom mdia = trak.getContainerAtomOfType(Atom.TYPE_mdia);
  int trackType = getTrackTypeForHdlr(parseHdlr(mdia.getLeafAtomOfType(Atom.TYPE_hdlr).data));
  if (trackType == C.TRACK_TYPE_UNKNOWN) {
    return null;
  }

  TkhdData tkhdData = parseTkhd(trak.getLeafAtomOfType(Atom.TYPE_tkhd).data);
  if (duration == C.TIME_UNSET) {
    duration = tkhdData.duration;
  }
  long movieTimescale = parseMvhd(mvhd.data);
  long durationUs;
  if (duration == C.TIME_UNSET) {
    durationUs = C.TIME_UNSET;
  } else {
    durationUs = Util.scaleLargeTimestamp(duration, C.MICROS_PER_SECOND, movieTimescale);
  }
  Atom.ContainerAtom stbl = mdia.getContainerAtomOfType(Atom.TYPE_minf)
      .getContainerAtomOfType(Atom.TYPE_stbl);

  Pair<Long, String> mdhdData = parseMdhd(mdia.getLeafAtomOfType(Atom.TYPE_mdhd).data);
  StsdData stsdData = parseStsd(stbl.getLeafAtomOfType(Atom.TYPE_stsd).data, tkhdData.id,
      tkhdData.rotationDegrees, mdhdData.second, drmInitData, isQuickTime);
  long[] editListDurations = null;
  long[] editListMediaTimes = null;
  if (!ignoreEditLists) {
    Pair<long[], long[]> edtsData = parseEdts(trak.getContainerAtomOfType(Atom.TYPE_edts));
    editListDurations = edtsData.first;
    editListMediaTimes = edtsData.second;
  }
  return stsdData.format == null ? null
      : new Track(tkhdData.id, trackType, mdhdData.first, movieTimescale, durationUs,
          stsdData.format, stsdData.requiredSampleTransformation, stsdData.trackEncryptionBoxes,
          stsdData.nalUnitLengthFieldLength, editListDurations, editListMediaTimes);
}
 
Example 11
Source File: XingSeeker.java    From K-Sonic with MIT License 5 votes
/**
 * Returns a {@link XingSeeker} for seeking in the stream, if required information is present.
 * Returns {@code null} if not. On returning, {@code frame}'s position is not specified so the
 * caller should reset it.
 *
 * @param mpegAudioHeader The MPEG audio header associated with the frame.
 * @param frame The data in this audio frame, with its position set to immediately after the
 *    'Xing' or 'Info' tag.
 * @param position The position (byte offset) of the start of this frame in the stream.
 * @param inputLength The length of the stream in bytes.
 * @return A {@link XingSeeker} for seeking in the stream, or {@code null} if the required
 *     information is not present.
 */
public static XingSeeker create(MpegAudioHeader mpegAudioHeader, ParsableByteArray frame,
    long position, long inputLength) {
  int samplesPerFrame = mpegAudioHeader.samplesPerFrame;
  int sampleRate = mpegAudioHeader.sampleRate;
  long firstFramePosition = position + mpegAudioHeader.frameSize;

  int flags = frame.readInt();
  int frameCount;
  if ((flags & 0x01) != 0x01 || (frameCount = frame.readUnsignedIntToInt()) == 0) {
    // If the frame count is missing/invalid, the header can't be used to determine the duration.
    return null;
  }
  long durationUs = Util.scaleLargeTimestamp(frameCount, samplesPerFrame * C.MICROS_PER_SECOND,
      sampleRate);
  if ((flags & 0x06) != 0x06) {
    // If the size in bytes or table of contents is missing, the stream is not seekable.
    return new XingSeeker(firstFramePosition, durationUs, inputLength);
  }

  long sizeBytes = frame.readUnsignedIntToInt();
  frame.skipBytes(1);
  long[] tableOfContents = new long[99];
  for (int i = 0; i < 99; i++) {
    tableOfContents[i] = frame.readUnsignedByte();
  }

  // TODO: Handle encoder delay and padding in 3 bytes offset by xingBase + 213 bytes:
  // delay = (frame.readUnsignedByte() << 4) + (frame.readUnsignedByte() >> 4);
  // padding = ((frame.readUnsignedByte() & 0x0F) << 8) + frame.readUnsignedByte();
  return new XingSeeker(firstFramePosition, durationUs, inputLength, tableOfContents,
      sizeBytes, mpegAudioHeader.frameSize);
}
 
Example 12
Source File: SegmentBase.java    From MediaSDK with Apache License 2.0 4 votes
/**
 * Returns the presentation time offset, in microseconds.
 */
public long getPresentationTimeOffsetUs() {
  return Util.scaleLargeTimestamp(presentationTimeOffset, C.MICROS_PER_SECOND, timescale);
}
 
Example 13
Source File: MatroskaExtractor.java    From TelePlus-Android with GNU General Public License v2.0 4 votes
private long scaleTimecodeToUs(long unscaledTimecode) throws ParserException {
  if (timecodeScale == C.TIME_UNSET) {
    throw new ParserException("Can't scale timecode prior to timecodeScale being set.");
  }
  return Util.scaleLargeTimestamp(unscaledTimecode, timecodeScale, 1000);
}
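In Matroska, TimecodeScale is the number of nanoseconds per timecode unit (1,000,000 by default, i.e. millisecond resolution), so multiplying by timecodeScale and dividing by 1000 converts timecode units to microseconds. A small illustration using that default (imports as in the first sketch):

// Matroska's default TimecodeScale: 1,000,000 ns per timecode unit (1 ms resolution).
long timecodeScale = 1_000_000;
long unscaledTimecode = 5_000; // 5000 timecode units, i.e. five seconds

long timecodeUs = Util.scaleLargeTimestamp(unscaledTimecode, timecodeScale, 1000);
// timecodeUs == 5_000_000 (5000 * 1_000_000 ns, divided by 1000 ns per microsecond).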
 
Example 14
Source File: VbriSeeker.java    From MediaSDK with Apache License 2.0 4 votes
/**
 * Returns a {@link VbriSeeker} for seeking in the stream, if required information is present.
 * Returns {@code null} if not. On returning, {@code frame}'s position is not specified so the
 * caller should reset it.
 *
 * @param inputLength The length of the stream in bytes, or {@link C#LENGTH_UNSET} if unknown.
 * @param position The position of the start of this frame in the stream.
 * @param mpegAudioHeader The MPEG audio header associated with the frame.
 * @param frame The data in this audio frame, with its position set to immediately after the
 *     'VBRI' tag.
 * @return A {@link VbriSeeker} for seeking in the stream, or {@code null} if the required
 *     information is not present.
 */
public static @Nullable VbriSeeker create(
    long inputLength, long position, MpegAudioHeader mpegAudioHeader, ParsableByteArray frame) {
  frame.skipBytes(10);
  int numFrames = frame.readInt();
  if (numFrames <= 0) {
    return null;
  }
  int sampleRate = mpegAudioHeader.sampleRate;
  long durationUs = Util.scaleLargeTimestamp(numFrames,
      C.MICROS_PER_SECOND * (sampleRate >= 32000 ? 1152 : 576), sampleRate);
  int entryCount = frame.readUnsignedShort();
  int scale = frame.readUnsignedShort();
  int entrySize = frame.readUnsignedShort();
  frame.skipBytes(2);

  long minPosition = position + mpegAudioHeader.frameSize;
  // Read table of contents entries.
  long[] timesUs = new long[entryCount];
  long[] positions = new long[entryCount];
  for (int index = 0; index < entryCount; index++) {
    timesUs[index] = (index * durationUs) / entryCount;
    // Ensure positions do not fall within the frame containing the VBRI header. This constraint
    // will normally only apply to the first entry in the table.
    positions[index] = Math.max(position, minPosition);
    int segmentSize;
    switch (entrySize) {
      case 1:
        segmentSize = frame.readUnsignedByte();
        break;
      case 2:
        segmentSize = frame.readUnsignedShort();
        break;
      case 3:
        segmentSize = frame.readUnsignedInt24();
        break;
      case 4:
        segmentSize = frame.readUnsignedIntToInt();
        break;
      default:
        return null;
    }
    position += segmentSize * scale;
  }
  if (inputLength != C.LENGTH_UNSET && inputLength != position) {
    Log.w(TAG, "VBRI data size mismatch: " + inputLength + ", " + position);
  }
  return new VbriSeeker(timesUs, positions, durationUs, /* dataEndPosition= */ position);
}
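The duration computation above multiplies the frame count by the samples per frame (1152 for sample rates of 32 kHz and above, 576 below, matching the ternary in the code) and divides by the sample rate. A quick worked example with hypothetical numbers (imports as in the first sketch):

// Hypothetical VBR stream: 10,000 frames of 1152 samples each at 44.1 kHz.
int numFrames = 10_000;
int sampleRate = 44_100;

long durationUs = Util.scaleLargeTimestamp(
    numFrames, C.MICROS_PER_SECOND * (sampleRate >= 32000 ? 1152 : 576), sampleRate);
// 10,000 * 1152 / 44,100 ≈ 261.22 seconds, so durationUs ≈ 261_224_489.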
 
Example 15
Source File: SegmentBase.java    From Telegram-FOSS with GNU General Public License v2.0 4 votes
/**
 * Returns the presentation time offset, in microseconds.
 */
public long getPresentationTimeOffsetUs() {
  return Util.scaleLargeTimestamp(presentationTimeOffset, C.MICROS_PER_SECOND, timescale);
}
 
Example 16
Source File: VbriSeeker.java    From TelePlus-Android with GNU General Public License v2.0 4 votes
/**
 * Returns a {@link VbriSeeker} for seeking in the stream, if required information is present.
 * Returns {@code null} if not. On returning, {@code frame}'s position is not specified so the
 * caller should reset it.
 *
 * @param inputLength The length of the stream in bytes, or {@link C#LENGTH_UNSET} if unknown.
 * @param position The position of the start of this frame in the stream.
 * @param mpegAudioHeader The MPEG audio header associated with the frame.
 * @param frame The data in this audio frame, with its position set to immediately after the
 *     'VBRI' tag.
 * @return A {@link VbriSeeker} for seeking in the stream, or {@code null} if the required
 *     information is not present.
 */
public static VbriSeeker create(long inputLength, long position, MpegAudioHeader mpegAudioHeader,
    ParsableByteArray frame) {
  frame.skipBytes(10);
  int numFrames = frame.readInt();
  if (numFrames <= 0) {
    return null;
  }
  int sampleRate = mpegAudioHeader.sampleRate;
  long durationUs = Util.scaleLargeTimestamp(numFrames,
      C.MICROS_PER_SECOND * (sampleRate >= 32000 ? 1152 : 576), sampleRate);
  int entryCount = frame.readUnsignedShort();
  int scale = frame.readUnsignedShort();
  int entrySize = frame.readUnsignedShort();
  frame.skipBytes(2);

  long minPosition = position + mpegAudioHeader.frameSize;
  // Read table of contents entries.
  long[] timesUs = new long[entryCount];
  long[] positions = new long[entryCount];
  for (int index = 0; index < entryCount; index++) {
    timesUs[index] = (index * durationUs) / entryCount;
    // Ensure positions do not fall within the frame containing the VBRI header. This constraint
    // will normally only apply to the first entry in the table.
    positions[index] = Math.max(position, minPosition);
    int segmentSize;
    switch (entrySize) {
      case 1:
        segmentSize = frame.readUnsignedByte();
        break;
      case 2:
        segmentSize = frame.readUnsignedShort();
        break;
      case 3:
        segmentSize = frame.readUnsignedInt24();
        break;
      case 4:
        segmentSize = frame.readUnsignedIntToInt();
        break;
      default:
        return null;
    }
    position += segmentSize * scale;
  }
  if (inputLength != C.LENGTH_UNSET && inputLength != position) {
    Log.w(TAG, "VBRI data size mismatch: " + inputLength + ", " + position);
  }
  return new VbriSeeker(timesUs, positions, durationUs);
}
 
Example 17
Source File: FragmentedMp4Extractor.java    From TelePlus-Android with GNU General Public License v2.0 4 votes
/**
 * Parses a sidx atom (defined in 14496-12).
 *
 * @param atom The atom data.
 * @param inputPosition The input position of the first byte after the atom.
 * @return A pair consisting of the earliest presentation time in microseconds, and the parsed
 *     {@link ChunkIndex}.
 */
private static Pair<Long, ChunkIndex> parseSidx(ParsableByteArray atom, long inputPosition)
    throws ParserException {
  atom.setPosition(Atom.HEADER_SIZE);
  int fullAtom = atom.readInt();
  int version = Atom.parseFullAtomVersion(fullAtom);

  atom.skipBytes(4);
  long timescale = atom.readUnsignedInt();
  long earliestPresentationTime;
  long offset = inputPosition;
  if (version == 0) {
    earliestPresentationTime = atom.readUnsignedInt();
    offset += atom.readUnsignedInt();
  } else {
    earliestPresentationTime = atom.readUnsignedLongToLong();
    offset += atom.readUnsignedLongToLong();
  }
  long earliestPresentationTimeUs = Util.scaleLargeTimestamp(earliestPresentationTime,
      C.MICROS_PER_SECOND, timescale);

  atom.skipBytes(2);

  int referenceCount = atom.readUnsignedShort();
  int[] sizes = new int[referenceCount];
  long[] offsets = new long[referenceCount];
  long[] durationsUs = new long[referenceCount];
  long[] timesUs = new long[referenceCount];

  long time = earliestPresentationTime;
  long timeUs = earliestPresentationTimeUs;
  for (int i = 0; i < referenceCount; i++) {
    int firstInt = atom.readInt();

    int type = 0x80000000 & firstInt;
    if (type != 0) {
      throw new ParserException("Unhandled indirect reference");
    }
    long referenceDuration = atom.readUnsignedInt();

    sizes[i] = 0x7FFFFFFF & firstInt;
    offsets[i] = offset;

    // Calculate time and duration values such that any rounding errors are consistent. i.e. That
    // timesUs[i] + durationsUs[i] == timesUs[i + 1].
    timesUs[i] = timeUs;
    time += referenceDuration;
    timeUs = Util.scaleLargeTimestamp(time, C.MICROS_PER_SECOND, timescale);
    durationsUs[i] = timeUs - timesUs[i];

    atom.skipBytes(4);
    offset += sizes[i];
  }

  return Pair.create(earliestPresentationTimeUs,
      new ChunkIndex(sizes, offsets, durationsUs, timesUs));
}
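One detail worth noting in the loop above: rather than rescaling each referenceDuration on its own, the parser rescales the running total and takes differences, so per-entry rounding error never accumulates and timesUs[i] + durationsUs[i] == timesUs[i + 1] holds exactly. A small sketch of the same pattern with hypothetical durations (imports as in the first sketch):

// Hypothetical sidx references: three equal durations in a 90 kHz timescale.
long timescale = 90_000;
long[] referenceDurations = {30_001, 30_001, 30_001};

long time = 0;
long timeUs = 0;
for (long referenceDuration : referenceDurations) {
  long startUs = timeUs;
  time += referenceDuration;
  // Rescale the cumulative time, not the individual duration, so each entry's
  // rounding error is absorbed by the start time of the next entry.
  timeUs = Util.scaleLargeTimestamp(time, C.MICROS_PER_SECOND, timescale);
  long durationUs = timeUs - startUs;
  System.out.println(startUs + " + " + durationUs + " = " + timeUs);
}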
 
Example 18
Source File: MatroskaExtractor.java    From Telegram with GNU General Public License v2.0 4 votes
private long scaleTimecodeToUs(long unscaledTimecode) throws ParserException {
  if (timecodeScale == C.TIME_UNSET) {
    throw new ParserException("Can't scale timecode prior to timecodeScale being set.");
  }
  return Util.scaleLargeTimestamp(unscaledTimecode, timecodeScale, 1000);
}
 
Example 19
Source File: VbriSeeker.java    From Telegram with GNU General Public License v2.0 4 votes
/**
 * Returns a {@link VbriSeeker} for seeking in the stream, if required information is present.
 * Returns {@code null} if not. On returning, {@code frame}'s position is not specified so the
 * caller should reset it.
 *
 * @param inputLength The length of the stream in bytes, or {@link C#LENGTH_UNSET} if unknown.
 * @param position The position of the start of this frame in the stream.
 * @param mpegAudioHeader The MPEG audio header associated with the frame.
 * @param frame The data in this audio frame, with its position set to immediately after the
 *     'VBRI' tag.
 * @return A {@link VbriSeeker} for seeking in the stream, or {@code null} if the required
 *     information is not present.
 */
public static @Nullable VbriSeeker create(
    long inputLength, long position, MpegAudioHeader mpegAudioHeader, ParsableByteArray frame) {
  frame.skipBytes(10);
  int numFrames = frame.readInt();
  if (numFrames <= 0) {
    return null;
  }
  int sampleRate = mpegAudioHeader.sampleRate;
  long durationUs = Util.scaleLargeTimestamp(numFrames,
      C.MICROS_PER_SECOND * (sampleRate >= 32000 ? 1152 : 576), sampleRate);
  int entryCount = frame.readUnsignedShort();
  int scale = frame.readUnsignedShort();
  int entrySize = frame.readUnsignedShort();
  frame.skipBytes(2);

  long minPosition = position + mpegAudioHeader.frameSize;
  // Read table of contents entries.
  long[] timesUs = new long[entryCount];
  long[] positions = new long[entryCount];
  for (int index = 0; index < entryCount; index++) {
    timesUs[index] = (index * durationUs) / entryCount;
    // Ensure positions do not fall within the frame containing the VBRI header. This constraint
    // will normally only apply to the first entry in the table.
    positions[index] = Math.max(position, minPosition);
    int segmentSize;
    switch (entrySize) {
      case 1:
        segmentSize = frame.readUnsignedByte();
        break;
      case 2:
        segmentSize = frame.readUnsignedShort();
        break;
      case 3:
        segmentSize = frame.readUnsignedInt24();
        break;
      case 4:
        segmentSize = frame.readUnsignedIntToInt();
        break;
      default:
        return null;
    }
    position += segmentSize * scale;
  }
  if (inputLength != C.LENGTH_UNSET && inputLength != position) {
    Log.w(TAG, "VBRI data size mismatch: " + inputLength + ", " + position);
  }
  return new VbriSeeker(timesUs, positions, durationUs, /* dataEndPosition= */ position);
}
 
Example 20
Source File: MatroskaExtractor.java    From Telegram-FOSS with GNU General Public License v2.0 4 votes
private long scaleTimecodeToUs(long unscaledTimecode) throws ParserException {
  if (timecodeScale == C.TIME_UNSET) {
    throw new ParserException("Can't scale timecode prior to timecodeScale being set.");
  }
  return Util.scaleLargeTimestamp(unscaledTimecode, timecodeScale, 1000);
}