The following algorithm takes an array as input and returns a new array with all duplicate elements removed, keeping the first occurrence of each value. For example, if the input array is {77, 78, 78, 64, 39, 78, 39}, the algorithm returns {77, 78, 64, 39}.
S = new empty set
D = new empty dynamic array
for every element x in input array
    if not S.member(x) then
        S.insert(x)
        D.append(x)
return D
What is the big-O complexity of this algorithm if the set is implemented as
a) a hash table?
b) a balanced BST?
Give reasons.
a) With a hash table, each S.member(x) / S.insert(x) operation takes O(1) expected time, so the loop over n input elements runs in O(n) expected time overall. In the worst case many keys collide; if collisions are resolved with chaining, a lookup may have to walk a bucket's entire linked list to decide whether the new element is a duplicate, so a single operation degrades to O(n) and the whole algorithm to O(n²). A minimal Java sketch of this version is given below.
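Here is a minimal, runnable Java sketch of the hash-table version (the class and method names are my own, not part of the question); java.util.HashSet is the hash-table-backed set:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DedupHash {
    // Removes duplicates from the input, keeping the first occurrence of each value.
    // HashSet membership tests are O(1) expected, so the whole pass is O(n) expected.
    static List<Integer> removeDuplicates(int[] input) {
        Set<Integer> seen = new HashSet<>();      // S in the pseudocode
        List<Integer> result = new ArrayList<>(); // D in the pseudocode
        for (int x : input) {
            if (seen.add(x)) {    // add() returns false if x was already present
                result.add(x);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        int[] input = {77, 78, 78, 64, 39, 78, 39};
        System.out.println(removeDuplicates(input)); // prints [77, 78, 64, 39]
    }
}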
b) With a balanced BST, each S.member(x) / S.insert(x) operation costs O(height of the tree): in the worst case the search walks from the root down to a leaf. Because the tree is kept balanced, its height is O(log n), so deciding whether each new element is a duplicate takes O(log n), and the whole algorithm over n elements runs in O(n log n).
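The only change needed for the balanced-BST case is swapping HashSet for java.util.TreeSet, which is backed by a red-black tree (again a sketch with names of my own choosing):

import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class DedupTree {
    // Identical algorithm; TreeSet (a red-black tree) guarantees O(log n)
    // per add/contains, so the whole pass is O(n log n) in the worst case.
    static List<Integer> removeDuplicates(int[] input) {
        Set<Integer> seen = new TreeSet<>();
        List<Integer> result = new ArrayList<>();
        for (int x : input) {
            if (seen.add(x)) {    // add() returns false if x was already present
                result.add(x);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(removeDuplicates(new int[]{77, 78, 78, 64, 39, 78, 39})); // [77, 78, 64, 39]
    }
}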