A. itemsDf.write.avro(fileLocation)
B. itemsDf.write.format("avro").mode("ignore").save(fileLocation)
C. itemsDf.write.format("avro").mode("errorifexists").save(fileLocation)
D. itemsDf.save.format("avro").mode("ignore").write(fileLocation)
E. spark.DataFrameWriter(itemsDf).format("avro").write(fileLocation)
Explanation:
The trick in this question is knowing the "modes" of the DataFrameWriter. Mode "ignore" will neither replace an already-existing file nor throw an error. Mode "errorifexists" will throw an error, and is the default mode of the DataFrameWriter. The question explicitly calls for the DataFrame to be "silently" written if it does not exist, so you need to specify mode("ignore") here to avoid having Spark communicate any error to you if the file already exists.
The "overwrite" mode would not be right here: although it would be silent, it would overwrite the already-existing file, which is not what the question asks for.
It is worth noting that the option starting with spark.DataFrameWriter(itemsDf) cannot work: spark references the SparkSession object, and that object does not provide a DataFrameWriter. The DataFrameWriter is instead obtained from a DataFrame via its write attribute, as in itemsDf.write.
As you can see in the documentation (below), DataFrameWriter is part of PySpark's SQL
API, but not of its SparkSession API.
More info:
DataFrameWriter: pyspark.sql.DataFrameWriter.save ― PySpark 3.1.1 documentation
SparkSession API: Spark SQL ― PySpark 3.1.1 documentation