In R, the is.na() function returns an object of the same shape as its input, with TRUE where a value is NA (missing) and FALSE otherwise:
| col1 | col2 | 
|---|---|
| NA | 1 | 
| 1 | NA | 
| NA | NA | 
| 1 | 1 | 
is.na() -->
| col1 | col2 | 
|---|---|
| TRUE | FALSE | 
| FALSE | TRUE | 
| TRUE | TRUE | 
| FALSE | FALSE | 
I'm wondering if there is an equivalent PySpark function that returns the full DataFrame populated with True/False values. I don't want to use PySpark's filter/where, since that would not return the full dataset.
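To show the shape of output I'm after, here's a rough sketch of what I imagine it might look like. I'm not sure this is the right or idiomatic approach; the per-column isNull() select is just my guess:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Toy DataFrame mirroring the table above (None playing the role of R's NA)
df = spark.createDataFrame(
    [(None, 1), (1, None), (None, None), (1, 1)],
    ["col1", "col2"],
)

# My guess at the desired result: one boolean column per original column,
# True where the value is null and False otherwise
null_mask = df.select([F.col(c).isNull().alias(c) for c in df.columns])
null_mask.show()
```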
Thanks in advance!
PS: If my formatting is off, please let me know; this is my first Stack Overflow post, so I'm not 100% sure how the formatting works.
