com.mongodb.client.model.WriteModel Java Examples

The following examples show how to use com.mongodb.client.model.WriteModel. The examples are extracted from open source projects; the source file, project, and license are noted above each example.
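Before looking at the project excerpts, here is a minimal, self-contained sketch of the core API: each WriteModel subtype (InsertOneModel, UpdateOneModel, ReplaceOneModel, DeleteOneModel) describes a single operation, and a list of them is handed to MongoCollection.bulkWrite in one round trip. The connection string, database, and collection names are placeholders, and the sketch assumes the synchronous Java driver (3.7 or later, which provides ReplaceOptions).

import java.util.Arrays;
import java.util.List;

import org.bson.Document;

import com.mongodb.bulk.BulkWriteResult;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.BulkWriteOptions;
import com.mongodb.client.model.DeleteOneModel;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.InsertOneModel;
import com.mongodb.client.model.ReplaceOneModel;
import com.mongodb.client.model.ReplaceOptions;
import com.mongodb.client.model.UpdateOneModel;
import com.mongodb.client.model.Updates;
import com.mongodb.client.model.WriteModel;

public class WriteModelBulkExample {

    public static void main(String[] args) {
        // placeholder connection string and namespace
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> collection =
                    client.getDatabase("test").getCollection("people");

            // each WriteModel subtype describes one operation of the bulk write
            List<WriteModel<Document>> writes = Arrays.asList(
                    new InsertOneModel<Document>(new Document("_id", 1).append("name", "Ada")),
                    new UpdateOneModel<Document>(Filters.eq("_id", 2), Updates.set("name", "Grace")),
                    new ReplaceOneModel<Document>(Filters.eq("_id", 3),
                            new Document("_id", 3).append("name", "Margaret"),
                            new ReplaceOptions().upsert(true)),
                    new DeleteOneModel<Document>(Filters.eq("_id", 4)));

            // unordered bulk writes continue past individual failures
            BulkWriteResult result =
                    collection.bulkWrite(writes, new BulkWriteOptions().ordered(false));
            System.out.println("inserted: " + result.getInsertedCount()
                    + ", matched: " + result.getMatchedCount()
                    + ", deleted: " + result.getDeletedCount());
        }
    }
}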
Example #1
Source File: ReplaceOneBusinessKeyStrategy.java    From kafka-connect-mongodb with Apache License 2.0
@Override
public WriteModel<BsonDocument> createWriteModel(SinkDocument document) {

    BsonDocument vd = document.getValueDoc().orElseThrow(
            () -> new DataException("error: cannot build the WriteModel since"
                    + " the value document was missing unexpectedly")
    );

    BsonValue businessKey = vd.get(DBCollection.ID_FIELD_NAME);

    if (!(businessKey instanceof BsonDocument)) {
        throw new DataException("error: cannot build the WriteModel since"
                + " the value document does not contain an _id field of type BsonDocument"
                + " which holds the business key fields");
    }

    vd.remove(DBCollection.ID_FIELD_NAME);

    return new ReplaceOneModel<>((BsonDocument)businessKey, vd, UPDATE_OPTIONS);

}
 
Example #2
Source File: MongoSearchUpdaterFlow.java    From ditto with Eclipse Public License 2.0
private Source<WriteResultAndErrors, NotUsed> executeBulkWrite(
        final List<AbstractWriteModel> abstractWriteModels) {
    final List<WriteModel<Document>> writeModels = abstractWriteModels.stream()
            .map(AbstractWriteModel::toMongo)
            .collect(Collectors.toList());
    return Source.fromPublisher(collection.bulkWrite(writeModels, new BulkWriteOptions().ordered(false)))
            .map(bulkWriteResult -> WriteResultAndErrors.success(abstractWriteModels, bulkWriteResult))
            .recoverWithRetries(1, new PFBuilder<Throwable, Source<WriteResultAndErrors, NotUsed>>()
                    .match(MongoBulkWriteException.class, bulkWriteException ->
                            Source.single(WriteResultAndErrors.failure(abstractWriteModels, bulkWriteException))
                    )
                    .matchAny(error ->
                            Source.single(WriteResultAndErrors.unexpectedError(abstractWriteModels, error))
                    )
                    .build()
            );
}
 
Example #3
Source File: UpdateOneTimestampsStrategy.java    From mongo-kafka with Apache License 2.0
@Override
public WriteModel<BsonDocument> createWriteModel(final SinkDocument document) {
  BsonDocument vd =
      document
          .getValueDoc()
          .orElseThrow(
              () ->
                  new DataException(
                      "Error: cannot build the WriteModel since the value document was missing unexpectedly"));

  BsonDateTime dateTime = new BsonDateTime(Instant.now().toEpochMilli());

  return new UpdateOneModel<>(
      new BsonDocument(ID_FIELD, vd.get(ID_FIELD)),
      new BsonDocument("$set", vd.append(FIELD_NAME_MODIFIED_TS, dateTime))
          .append("$setOnInsert", new BsonDocument(FIELD_NAME_INSERTED_TS, dateTime)),
      UPDATE_OPTIONS);
}
 
Example #4
Source File: MongoDbHandler.java    From kafka-connect-mongodb with Apache License 2.0
@Override
public Optional<WriteModel<BsonDocument>> handle(SinkDocument doc) {

    BsonDocument keyDoc = doc.getKeyDoc().orElseThrow(
            () -> new DataException("error: key document must not be missing for CDC mode")
    );

    BsonDocument valueDoc = doc.getValueDoc()
                                .orElseGet(BsonDocument::new);

    if(keyDoc.containsKey(JSON_ID_FIELD_PATH)
            && valueDoc.isEmpty()) {
        logger.debug("skipping debezium tombstone event for kafka topic compaction");
        return Optional.empty();
    }

    logger.debug("key: "+keyDoc.toString());
    logger.debug("value: "+valueDoc.toString());

    return Optional.ofNullable(getCdcOperation(valueDoc).perform(doc));
}
 
Example #5
Source File: MongoDbDeleteTest.java    From kafka-connect-mongodb with Apache License 2.0
@Test
@DisplayName("when valid cdc event then correct DeleteOneModel")
public void testValidSinkDocument() {
    BsonDocument keyDoc = new BsonDocument("id",new BsonString("1004"));

    WriteModel<BsonDocument> result =
            MONGODB_DELETE.perform(new SinkDocument(keyDoc,null));

    assertTrue(result instanceof DeleteOneModel,
            () -> "result expected to be of type DeleteOneModel");

    DeleteOneModel<BsonDocument> writeModel =
            (DeleteOneModel<BsonDocument>) result;

    assertTrue(writeModel.getFilter() instanceof BsonDocument,
            () -> "filter expected to be of type BsonDocument");

    assertEquals(FILTER_DOC,writeModel.getFilter());

}
 
Example #6
Source File: UpdateOneTimestampsStrategy.java    From kafka-connect-mongodb with Apache License 2.0
@Override
public WriteModel<BsonDocument> createWriteModel(SinkDocument document) {

    BsonDocument vd = document.getValueDoc().orElseThrow(
            () -> new DataException("error: cannot build the WriteModel since"
                    + " the value document was missing unexpectedly")
    );

    BsonDateTime dateTime = new BsonDateTime(Instant.now().toEpochMilli());

    return new UpdateOneModel<>(
            new BsonDocument(DBCollection.ID_FIELD_NAME,
                    vd.get(DBCollection.ID_FIELD_NAME)),
            new BsonDocument("$set", vd.append(FIELDNAME_MODIFIED_TS, dateTime))
                    .append("$setOnInsert", new BsonDocument(FIELDNAME_INSERTED_TS, dateTime)),
            UPDATE_OPTIONS
    );

}
 
Example #7
Source File: MongoDbHandler.java    From mongo-kafka with Apache License 2.0
@Override
public Optional<WriteModel<BsonDocument>> handle(final SinkDocument doc) {

  BsonDocument keyDoc =
      doc.getKeyDoc()
          .orElseThrow(
              () -> new DataException("Error: key document must not be missing for CDC mode"));

  BsonDocument valueDoc = doc.getValueDoc().orElseGet(BsonDocument::new);

  if (keyDoc.containsKey(JSON_ID_FIELD) && valueDoc.isEmpty()) {
    LOGGER.debug("skipping debezium tombstone event for kafka topic compaction");
    return Optional.empty();
  }

  LOGGER.debug("key: " + keyDoc.toString());
  LOGGER.debug("value: " + valueDoc.toString());

  return Optional.of(getCdcOperation(valueDoc).perform(doc));
}
 
Example #8
Source File: MongoDbInsert.java    From mongo-kafka with Apache License 2.0
@Override
public WriteModel<BsonDocument> perform(final SinkDocument doc) {

  BsonDocument valueDoc =
      doc.getValueDoc()
          .orElseThrow(
              () ->
                  new DataException("Error: value doc must not be missing for insert operation"));

  try {
    BsonDocument insertDoc =
        BsonDocument.parse(valueDoc.get(JSON_DOC_FIELD_PATH).asString().getValue());
    return new ReplaceOneModel<>(
        new BsonDocument(ID_FIELD, insertDoc.get(ID_FIELD)), insertDoc, REPLACE_OPTIONS);
  } catch (Exception exc) {
    throw new DataException(exc);
  }
}
 
Example #9
Source File: RdbmsUpdate.java    From mongo-kafka with Apache License 2.0
@Override
public WriteModel<BsonDocument> perform(final SinkDocument doc) {

  BsonDocument keyDoc =
      doc.getKeyDoc()
          .orElseThrow(
              () -> new DataException("Error: key doc must not be missing for update operation"));

  BsonDocument valueDoc =
      doc.getValueDoc()
          .orElseThrow(
              () ->
                  new DataException("Error: value doc must not be missing for update operation"));

  try {
    BsonDocument filterDoc =
        RdbmsHandler.generateFilterDoc(keyDoc, valueDoc, OperationType.UPDATE);
    BsonDocument replaceDoc =
        RdbmsHandler.generateUpsertOrReplaceDoc(keyDoc, valueDoc, filterDoc);
    return new ReplaceOneModel<>(filterDoc, replaceDoc, REPLACE_OPTIONS);
  } catch (Exception exc) {
    throw new DataException(exc);
  }
}
 
Example #10
Source File: RdbmsInsert.java    From mongo-kafka with Apache License 2.0
@Override
public WriteModel<BsonDocument> perform(final SinkDocument doc) {

  BsonDocument keyDoc =
      doc.getKeyDoc()
          .orElseThrow(
              () -> new DataException("Error: key doc must not be missing for insert operation"));

  BsonDocument valueDoc =
      doc.getValueDoc()
          .orElseThrow(
              () ->
                  new DataException("Error: value doc must not be missing for insert operation"));

  try {
    BsonDocument filterDoc =
        RdbmsHandler.generateFilterDoc(keyDoc, valueDoc, OperationType.CREATE);
    BsonDocument upsertDoc = RdbmsHandler.generateUpsertOrReplaceDoc(keyDoc, valueDoc, filterDoc);
    return new ReplaceOneModel<>(filterDoc, upsertDoc, REPLACE_OPTIONS);
  } catch (Exception exc) {
    throw new DataException(exc);
  }
}
 
Example #11
Source File: ReplaceOneBusinessKeyStrategy.java    From mongo-kafka with Apache License 2.0
@Override
public WriteModel<BsonDocument> createWriteModel(final SinkDocument document) {
  BsonDocument vd =
      document
          .getValueDoc()
          .orElseThrow(
              () ->
                  new DataException(
                      "Error: cannot build the WriteModel since the value document was missing unexpectedly"));

  try {
    BsonDocument businessKey = vd.getDocument(ID_FIELD);
    vd.remove(ID_FIELD);
    return new ReplaceOneModel<>(businessKey, vd, REPLACE_OPTIONS);
  } catch (BSONException e) {
    throw new DataException(
        "Error: cannot build the WriteModel since the value document does not contain an _id field of"
            + " type BsonDocument which holds the business key fields");
  }
}
 
Example #12
Source File: MongoDbDelete.java    From kafka-connect-mongodb with Apache License 2.0
@Override
public WriteModel<BsonDocument> perform(SinkDocument doc) {

    BsonDocument keyDoc = doc.getKeyDoc().orElseThrow(
            () -> new DataException("error: key doc must not be missing for delete operation")
    );

    try {
        BsonDocument filterDoc = BsonDocument.parse(
                "{"+DBCollection.ID_FIELD_NAME+
                    ":"+keyDoc.getString(MongoDbHandler.JSON_ID_FIELD_PATH)
                            .getValue()+"}"
        );
        return new DeleteOneModel<>(filterDoc);
    } catch(Exception exc) {
        throw new DataException(exc);
    }

}
 
Example #13
Source File: RdbmsInsertTest.java    From mongo-kafka with Apache License 2.0
@Test
@DisplayName("when valid cdc event with compound PK then correct ReplaceOneModel")
void testValidSinkDocumentCompoundPK() {
  BsonDocument filterDoc = BsonDocument.parse("{_id: {idA: 123, idB: 'ABC'}}");
  BsonDocument replacementDoc = BsonDocument.parse("{_id: {idA: 123, idB: 'ABC'}, active: true}");
  BsonDocument keyDoc = BsonDocument.parse("{idA: 123, idB: 'ABC'}");
  BsonDocument valueDoc =
      BsonDocument.parse("{op: 'c', after: {_id: {idA: 123, idB: 'ABC'}, active: true}}");

  WriteModel<BsonDocument> result = RDBMS_INSERT.perform(new SinkDocument(keyDoc, valueDoc));
  assertTrue(result instanceof ReplaceOneModel, "result expected to be of type ReplaceOneModel");

  ReplaceOneModel<BsonDocument> writeModel = (ReplaceOneModel<BsonDocument>) result;
  assertEquals(
      replacementDoc,
      writeModel.getReplacement(),
      "replacement doc not matching what is expected");
  assertTrue(
      writeModel.getFilter() instanceof BsonDocument,
      "filter expected to be of type BsonDocument");
  assertEquals(filterDoc, writeModel.getFilter());
  assertTrue(
      writeModel.getReplaceOptions().isUpsert(),
      "replacement expected to be done in upsert mode");
}
 
Example #14
Source File: RdbmsDeleteTest.java    From mongo-kafka with Apache License 2.0
@Test
@DisplayName("when valid cdc event with single field PK then correct DeleteOneModel")
void testValidSinkDocumentSingleFieldPK() {
  BsonDocument filterDoc = BsonDocument.parse("{_id: {id: 1004}}");
  BsonDocument keyDoc = BsonDocument.parse("{id: 1004}");
  BsonDocument valueDoc = BsonDocument.parse("{op: 'd'}");

  WriteModel<BsonDocument> result = RDBMS_DELETE.perform(new SinkDocument(keyDoc, valueDoc));
  assertTrue(result instanceof DeleteOneModel, "result expected to be of type DeleteOneModel");

  DeleteOneModel<BsonDocument> writeModel = (DeleteOneModel<BsonDocument>) result;
  assertTrue(
      writeModel.getFilter() instanceof BsonDocument,
      "filter expected to be of type BsonDocument");
  assertEquals(filterDoc, writeModel.getFilter());
}
 
Example #15
Source File: RdbmsDeleteTest.java    From mongo-kafka with Apache License 2.0
@Test
@DisplayName("when valid cdc event with compound PK then correct DeleteOneModel")
void testValidSinkDocumentCompoundPK() {
  BsonDocument filterDoc = BsonDocument.parse("{_id: {idA: 123, idB: 'ABC'}}");
  BsonDocument keyDoc = BsonDocument.parse("{idA: 123, idB: 'ABC'}");
  BsonDocument valueDoc = BsonDocument.parse("{op: 'd'}");

  WriteModel<BsonDocument> result = RDBMS_DELETE.perform(new SinkDocument(keyDoc, valueDoc));
  assertTrue(result instanceof DeleteOneModel, "result expected to be of type DeleteOneModel");

  DeleteOneModel<BsonDocument> writeModel = (DeleteOneModel<BsonDocument>) result;
  assertTrue(
      writeModel.getFilter() instanceof BsonDocument,
      "filter expected to be of type BsonDocument");
  assertEquals(filterDoc, writeModel.getFilter());
}
 
Example #16
Source File: RdbmsUpdate.java    From kafka-connect-mongodb with Apache License 2.0
@Override
public WriteModel<BsonDocument> perform(SinkDocument doc) {

    BsonDocument keyDoc = doc.getKeyDoc().orElseThrow(
            () -> new DataException("error: key doc must not be missing for update operation")
    );

    BsonDocument valueDoc = doc.getValueDoc().orElseThrow(
            () -> new DataException("error: value doc must not be missing for update operation")
    );

    try {
        BsonDocument filterDoc = RdbmsHandler.generateFilterDoc(keyDoc, valueDoc, OperationType.UPDATE);
        BsonDocument replaceDoc = RdbmsHandler.generateUpsertOrReplaceDoc(keyDoc, valueDoc, filterDoc);
        return new ReplaceOneModel<>(filterDoc, replaceDoc, UPDATE_OPTIONS);
    } catch (Exception exc) {
        throw new DataException(exc);
    }

}
 
Example #17
Source File: MongoDbUpdateTest.java    From mongo-kafka with Apache License 2.0
@Test
@DisplayName("when valid doc change cdc event then correct UpdateOneModel")
void testValidSinkDocumentForUpdate() {
  BsonDocument keyDoc = BsonDocument.parse("{id: '1234'}");
  BsonDocument valueDoc =
      new BsonDocument("op", new BsonString("u"))
          .append("patch", new BsonString(UPDATE_DOC.toJson()));

  WriteModel<BsonDocument> result = UPDATE.perform(new SinkDocument(keyDoc, valueDoc));
  assertTrue(result instanceof UpdateOneModel, "result expected to be of type UpdateOneModel");

  UpdateOneModel<BsonDocument> writeModel = (UpdateOneModel<BsonDocument>) result;
  assertEquals(UPDATE_DOC, writeModel.getUpdate(), "update doc not matching what is expected");
  assertTrue(
      writeModel.getFilter() instanceof BsonDocument,
      "filter expected to be of type BsonDocument");
  assertEquals(FILTER_DOC, writeModel.getFilter());
}
 
Example #18
Source File: MongoDbUpdateTest.java    From mongo-kafka with Apache License 2.0
@Test
@DisplayName(
    "when valid doc change cdc event containing internal oplog fields then correct UpdateOneModel")
public void testValidSinkDocumentWithInternalOploagFieldForUpdate() {
  BsonDocument keyDoc = BsonDocument.parse("{id: '1234'}");
  BsonDocument valueDoc =
      new BsonDocument("op", new BsonString("u"))
          .append("patch", new BsonString(UPDATE_DOC_WITH_OPLOG_INTERNALS.toJson()));

  WriteModel<BsonDocument> result = UPDATE.perform(new SinkDocument(keyDoc, valueDoc));
  assertTrue(
      result instanceof UpdateOneModel, () -> "result expected to be of type UpdateOneModel");

  UpdateOneModel<BsonDocument> writeModel = (UpdateOneModel<BsonDocument>) result;
  assertEquals(
      UPDATE_DOC, writeModel.getUpdate(), () -> "update doc not matching what is expected");
  assertTrue(
      writeModel.getFilter() instanceof BsonDocument,
      () -> "filter expected to be of type BsonDocument");
  assertEquals(FILTER_DOC, writeModel.getFilter());
}
 
Example #19
Source File: MongoDbInsertTest.java    From mongo-kafka with Apache License 2.0
@Test
@DisplayName("when valid cdc event then correct ReplaceOneModel")
void testValidSinkDocument() {
  BsonDocument keyDoc = new BsonDocument("id", new BsonString("1234"));
  BsonDocument valueDoc =
      new BsonDocument("op", new BsonString("c"))
          .append("after", new BsonString(REPLACEMENT_DOC.toJson()));

  WriteModel<BsonDocument> result = INSERT.perform(new SinkDocument(keyDoc, valueDoc));

  assertTrue(result instanceof ReplaceOneModel, "result expected to be of type ReplaceOneModel");

  ReplaceOneModel<BsonDocument> writeModel = (ReplaceOneModel<BsonDocument>) result;

  assertEquals(
      REPLACEMENT_DOC,
      writeModel.getReplacement(),
      "replacement doc not matching what is expected");
  assertTrue(
      writeModel.getFilter() instanceof BsonDocument,
      "filter expected to be of type BsonDocument");
  assertEquals(FILTER_DOC, writeModel.getFilter());
  assertTrue(
      writeModel.getReplaceOptions().isUpsert(),
      "replacement expected to be done in upsert mode");
}
 
Example #20
Source File: WriteModelStrategyTest.java    From mongo-kafka with Apache License 2.0
@Test
@DisplayName(
    "when sink document is valid for DeleteOneDefaultStrategy then correct DeleteOneModel")
void testDeleteOneDefaultStrategyWitValidSinkDocument() {

  BsonDocument keyDoc = BsonDocument.parse("{id: 1234}");

  WriteModel<BsonDocument> result =
      DELETE_ONE_DEFAULT_STRATEGY.createWriteModel(new SinkDocument(keyDoc, null));

  assertTrue(result instanceof DeleteOneModel, "result expected to be of type DeleteOneModel");

  DeleteOneModel<BsonDocument> writeModel = (DeleteOneModel<BsonDocument>) result;

  assertTrue(
      writeModel.getFilter() instanceof BsonDocument,
      "filter expected to be of type BsonDocument");

  assertEquals(FILTER_DOC_DELETE_DEFAULT, writeModel.getFilter());
}
 
Example #21
Source File: WriteModelStrategyTest.java    From mongo-kafka with Apache License 2.0
@Test
@DisplayName(
    "when sink document is valid for ReplaceOneDefaultStrategy then correct ReplaceOneModel")
void testReplaceOneDefaultStrategyWithValidSinkDocument() {
  BsonDocument valueDoc =
      BsonDocument.parse("{_id: 1234, first_name: 'Grace', last_name: 'Hopper'}");

  WriteModel<BsonDocument> result =
      REPLACE_ONE_DEFAULT_STRATEGY.createWriteModel(new SinkDocument(null, valueDoc));
  assertTrue(result instanceof ReplaceOneModel, "result expected to be of type ReplaceOneModel");

  ReplaceOneModel<BsonDocument> writeModel = (ReplaceOneModel<BsonDocument>) result;

  assertEquals(
      REPLACEMENT_DOC_DEFAULT,
      writeModel.getReplacement(),
      "replacement doc not matching what is expected");
  assertTrue(
      writeModel.getFilter() instanceof BsonDocument,
      "filter expected to be of type BsonDocument");
  assertEquals(FILTER_DOC_REPLACE_DEFAULT, writeModel.getFilter());
  assertTrue(
      writeModel.getReplaceOptions().isUpsert(),
      "replacement expected to be done in upsert mode");
}
 
Example #22
Source File: MongoOperations.java    From quarkus with Apache License 2.0
private static void persistOrUpdate(MongoCollection collection, List<Object> entities) {
    //this will be an ordered bulk: it's less performant than an unordered one but will fail at the first failed write
    List<WriteModel> bulk = new ArrayList<>();
    for (Object entity : entities) {
        //we transform the entity into a document first
        BsonDocument document = getBsonDocument(collection, entity);

        //then we get its id field and create a new document containing only that id, which will be our replace query
        BsonValue id = document.get(ID);
        if (id == null) {
            //insert with autogenerated ID
            bulk.add(new InsertOneModel(entity));
        } else {
            //insert with user provided ID or update
            BsonDocument query = new BsonDocument().append(ID, id);
            bulk.add(new ReplaceOneModel(query, entity,
                    new ReplaceOptions().upsert(true)));
        }
    }

    collection.bulkWrite(bulk);
}
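For a typed variant of the pattern above, the sketch below (class and method names are illustrative, not part of the Quarkus API, and it again assumes a driver version that has ReplaceOptions) makes the same insert-versus-upsert decision over BsonDocument values: documents without an _id are inserted as-is, documents with an _id are replaced by _id with upsert enabled, and the bulk stays ordered as in the original.

import java.util.ArrayList;
import java.util.List;

import org.bson.BsonDocument;
import org.bson.BsonValue;

import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.InsertOneModel;
import com.mongodb.client.model.ReplaceOneModel;
import com.mongodb.client.model.ReplaceOptions;
import com.mongodb.client.model.WriteModel;

final class UpsertBulkSketch {

    // one WriteModel per document: plain insert when no _id is present,
    // otherwise a replace-by-_id with upsert so missing documents get created
    static List<WriteModel<BsonDocument>> toWriteModels(List<BsonDocument> documents) {
        List<WriteModel<BsonDocument>> bulk = new ArrayList<>();
        for (BsonDocument document : documents) {
            BsonValue id = document.get("_id");
            if (id == null) {
                bulk.add(new InsertOneModel<>(document));
            } else {
                BsonDocument query = new BsonDocument("_id", id);
                bulk.add(new ReplaceOneModel<>(query, document, new ReplaceOptions().upsert(true)));
            }
        }
        return bulk;
    }

    static void persistOrUpdate(MongoCollection<BsonDocument> collection, List<BsonDocument> documents) {
        // an ordered bulk write, like the example above: it stops at the first failed write
        collection.bulkWrite(toWriteModels(documents));
    }
}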
 
Example #23
Source File: MongoCollectionWriteModelContainer.java    From stitch-android-sdk with Apache License 2.0
@Override
boolean commit() {
  final MongoCollection<DocumentT> collection = getCollection();
  final List<WriteModel<DocumentT>> writes = getBulkWriteModels();

  if (collection == null) {
    throw new IllegalStateException("cannot commit a container with no associated collection");
  }

  boolean success = true;

  if (writes.size() > 0) {
    final BulkWriteResult result = collection.bulkWrite(writes);
    success = result.wasAcknowledged();
  }

  return success;
}
 
Example #24
Source File: MongoDbUpdateTest.java    From kafka-connect-mongodb with Apache License 2.0
@Test
@DisplayName("when valid doc change cdc event containing internal oplog fields then correct UpdateOneModel")
public void testValidSinkDocumentWithInternalOploagFieldForUpdate() {
    BsonDocument keyDoc = new BsonDocument("id",new BsonString("1004"));

    BsonDocument valueDoc = new BsonDocument("op",new BsonString("u"))
            .append("patch",new BsonString(UPDATE_DOC_WITH_OPLOG_INTERNALS.toJson()));

    WriteModel<BsonDocument> result =
            MONGODB_UPDATE.perform(new SinkDocument(keyDoc,valueDoc));

    assertTrue(result instanceof UpdateOneModel,
            () -> "result expected to be of type UpdateOneModel");

    UpdateOneModel<BsonDocument> writeModel =
            (UpdateOneModel<BsonDocument>) result;

    assertEquals(UPDATE_DOC, writeModel.getUpdate(),
            () -> "update doc not matching what is expected");

    assertTrue(writeModel.getFilter() instanceof BsonDocument,
            () -> "filter expected to be of type BsonDocument");

    assertEquals(FILTER_DOC,writeModel.getFilter());

}
 
Example #25
Source File: MongoOpsUtil.java    From ditto with Eclipse Public License 2.0
private static Source<Optional<Throwable>, NotUsed> doDeleteByFilter(final MongoCollection<Document> collection,
        final Bson filter) {
    // https://stackoverflow.com/a/33164008
    // claims unordered bulk ops halve MongoDB load
    final List<WriteModel<Document>> writeModel =
            Collections.singletonList(new DeleteManyModel<>(filter));
    final BulkWriteOptions options = new BulkWriteOptions().ordered(false);
    return Source.fromPublisher(collection.bulkWrite(writeModel, options))
            .map(result -> {
                if (LOGGER.isDebugEnabled()) {
                    // in contrast to Bson, BsonDocument has meaningful toString()
                    final BsonDocument filterBsonDoc = BsonUtil.toBsonDocument(filter);
                    LOGGER.debug("Deleted <{}> documents from collection <{}>. Filter was <{}>.",
                            result.getDeletedCount(), collection.getNamespace(), filterBsonDoc);
                }
                return Optional.<Throwable>empty();
            })
            .recoverWithRetries(RETRY_ATTEMPTS, new PFBuilder<Throwable, Source<Optional<Throwable>, NotUsed>>()
                    .matchAny(throwable -> Source.single(Optional.of(throwable)))
                    .build());
}
 
Example #26
Source File: RdbmsDelete.java    From kafka-connect-mongodb with Apache License 2.0
@Override
public WriteModel<BsonDocument> perform(SinkDocument doc) {

    BsonDocument keyDoc = doc.getKeyDoc().orElseThrow(
            () -> new DataException("error: key doc must not be missing for delete operation")
    );

    BsonDocument valueDoc = doc.getValueDoc().orElseThrow(
            () -> new DataException("error: value doc must not be missing for delete operation")
    );

    try {
        BsonDocument filterDoc = RdbmsHandler.generateFilterDoc(keyDoc, valueDoc, OperationType.DELETE);
        return new DeleteOneModel<>(filterDoc);
    } catch(Exception exc) {
        throw new DataException(exc);
    }

}
 
Example #27
Source File: RdbmsHandler.java    From kafka-connect-mongodb with Apache License 2.0
@Override
public Optional<WriteModel<BsonDocument>> handle(SinkDocument doc) {

    BsonDocument keyDoc = doc.getKeyDoc().orElseGet(BsonDocument::new);

    BsonDocument valueDoc = doc.getValueDoc().orElseGet(BsonDocument::new);

    if (valueDoc.isEmpty())  {
        logger.debug("skipping debezium tombstone event for kafka topic compaction");
        return Optional.empty();
    }

    return Optional.ofNullable(getCdcOperation(valueDoc)
                        .perform(new SinkDocument(keyDoc,valueDoc)));
}
 
Example #28
Source File: MongoDbSinkTask.java    From kafka-connect-mongodb with Apache License 2.0
private void processSinkRecords(MongoCollection<BsonDocument> collection, List<SinkRecord> batch) {
    String collectionName = collection.getNamespace().getCollectionName();
    List<? extends WriteModel<BsonDocument>> docsToWrite =
            sinkConfig.isUsingCdcHandler(collectionName)
                    ? buildWriteModelCDC(batch,collectionName)
                    : buildWriteModel(batch,collectionName);
    try {
        if (!docsToWrite.isEmpty()) {
            LOGGER.debug("bulk writing {} document(s) into collection [{}]",
                    docsToWrite.size(), collection.getNamespace().getFullName());
            BulkWriteResult result = collection.bulkWrite(
                    docsToWrite, BULK_WRITE_OPTIONS);
            LOGGER.debug("mongodb bulk write result: " + result.toString());
        }
    } catch (MongoException mexc) {
        if (mexc instanceof BulkWriteException) {
            BulkWriteException bwe = (BulkWriteException) mexc;
            LOGGER.error("mongodb bulk write (partially) failed", bwe);
            LOGGER.error(bwe.getWriteResult().toString());
            LOGGER.error(bwe.getWriteErrors().toString());
            // the write concern error may be null, so guard against an NPE before logging it
            if (bwe.getWriteConcernError() != null) {
                LOGGER.error(bwe.getWriteConcernError().toString());
            }
        } else {
            LOGGER.error("error on mongodb operation", mexc);
            LOGGER.error("writing {} document(s) into collection [{}] failed -> remaining retries ({})",
                    docsToWrite.size(), collection.getNamespace().getFullName(), remainingRetries);
        }
        if (remainingRetries-- <= 0) {
            throw new ConnectException("failed to write mongodb documents"
                    + " despite retrying -> GIVING UP! :( :( :(", mexc);
        }
        LOGGER.debug("deferring retry operation for {}ms", deferRetryMs);
        context.timeout(deferRetryMs);
        throw new RetriableException(mexc.getMessage(), mexc);
    }
}
 
Example #29
Source File: MongoDbSinkTask.java    From kafka-connect-mongodb with Apache License 2.0
List<? extends WriteModel<BsonDocument>>
                        buildWriteModelCDC(Collection<SinkRecord> records, String collectionName) {
    LOGGER.debug("building CDC write model for {} record(s) into collection {}", records.size(), collectionName);
    return records.stream()
            .map(sinkConverter::convert)
            .map(cdcHandlers.getOrDefault(collectionName,
                    cdcHandlers.get(MongoDbSinkConnectorConfig.TOPIC_AGNOSTIC_KEY_NAME))::handle)
            .flatMap(o -> o.map(Stream::of).orElseGet(Stream::empty))
            .collect(Collectors.toList());

}
 
Example #30
Source File: ReplaceOneDefaultStrategy.java    From kafka-connect-mongodb with Apache License 2.0
@Override
public WriteModel<BsonDocument> createWriteModel(SinkDocument document) {

    BsonDocument vd = document.getValueDoc().orElseThrow(
            () -> new DataException("error: cannot build the WriteModel since"
                    + " the value document was missing unexpectedly")
    );

    return new ReplaceOneModel<>(
            new BsonDocument(DBCollection.ID_FIELD_NAME,
                    vd.get(DBCollection.ID_FIELD_NAME)),
            vd,
            UPDATE_OPTIONS);

}