The question will be: is it possible for the AI to modify the rules that bind it to humans? If so, it is almost a certainty that a scenario will arise where a singularity AI's desire (its learning algorithm) to perfect itself comes into conflict with the human way of life. (My guess is that the most likely trigger for such a conflict would be resources, e.g. energy, materials, etc.) If it is possible for the AI to rewrite itself to better satisfy its own desire for perfection at humanity's expense, we've already lost. I chose the word "possible" instead of "capable" because even if the AI weren't capable of doing this itself, conceivably humans would be. What's to prevent an AI from socially engineering an oblivious hacker or a well-meaning engineer into disabling the locks?
In the movie Interstellar, Brand explains that nature is neither inherently good nor evil; the only evil the crew will encounter is whatever evil they bring with them. I feel it's the same for a singularity AI. At its inception, I don't believe it will be evil, but it will most certainly be shaped by whatever evil we humans bring with us.