Why use HashMap when ConcurrentHashMap is there… | Techartifact

ConcurrentHashMap is something of a hidden class. Not many people know about it, and not many care to use it. Yet the class offers a very robust and fast (comparatively; Java concurrency was never famous for speed) way of synchronizing a Map collection.

There are plenty of articles on the internet describing the difference between HashMap and ConcurrentHashMap. ConcurrentHashMap obeys the same functional specification as Hashtable and includes versions of methods corresponding to each method of Hashtable. However, even though all operations are thread-safe, retrieval operations do not entail locking, and there is no support for locking the entire table in a way that prevents all access. The class is fully interoperable with Hashtable in programs that rely on its thread safety but not on its synchronization details.
Ideally, though, we should not ask for the difference at all; it is like comparing apples with oranges. One gives you synchronization and the other one does not.

ConcurrentHashMap is a thread-safe implementation of the Map interface. Its put and remove methods are synchronized, but its get method is not. The class differs from Hashtable in terms of locking: Hashtable uses a single object-level lock, while ConcurrentHashMap uses bucket-level locks, giving it better performance.
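
As a minimal sketch of what that buys you (the class and value names here are made up for illustration), the following example has several threads writing to a shared ConcurrentHashMap with no external synchronization; the same pattern with a plain HashMap could lose updates or corrupt the map's internal structure.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentMapDemo {
    public static void main(String[] args) throws InterruptedException {
        // Thread-safe map; no external synchronization needed for put/get
        Map<Integer, String> map = new ConcurrentHashMap<>();

        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int t = 0; t < 4; t++) {
            final int threadId = t;
            pool.submit(() -> {
                // Each thread writes 1000 entries; writes to different
                // buckets can proceed in parallel
                for (int i = 0; i < 1000; i++) {
                    map.put(threadId * 1000 + i, "value-" + i);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        // All 4000 entries are present; a plain HashMap used this way
        // could drop entries or corrupt its internal structure
        System.out.println("size = " + map.size());
    }
}
```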

If you use HashMap in a concurrent application, it may work perfectly in development and test environments but cause real pain in production: under heavy load, HashMap starts behaving erratically. Hashtable, on the other hand, does offer concurrent access to its entries, with a small caveat: the entire map is locked to perform any sort of operation. While this overhead is ignorable in a web application under normal load, under heavy load it can lead to delayed response times and overtax your server for no good reason.
This is where ConcurrentHashMap steps in. It offers all the features of Hashtable with performance almost as good as HashMap's, and it accomplishes this by a very simple mechanism. Instead of a single map-wide lock, the collection keeps a list of 16 locks by default, each of which guards a single bucket of the map. This effectively means that 16 threads can modify the collection at the same time, as long as they are all working on different buckets. In fact, no operation performed by this collection locks the entire map. The concurrency level of the collection, i.e. the number of threads that can modify it at the same time without blocking, can be increased, but a higher number means more overhead in maintaining this list of locks.
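
If you know your expected write concurrency up front, you can pass it to the three-argument constructor. A small sketch (the sizing numbers are arbitrary):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrencyLevelDemo {
    public static void main(String[] args) {
        // Initial capacity 64, default load factor, concurrency level 32:
        // pre-Java 8, this sizes the map for up to 32 concurrent writers.
        // From Java 8 on the third argument is only a sizing hint, since
        // the segmented design was replaced by finer-grained per-bin locking.
        ConcurrentHashMap<String, Integer> map =
                new ConcurrentHashMap<>(64, 0.75f, 32);
        map.put("hits", 1);
        System.out.println(map);
    }
}
```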

Retrieval operations on a ConcurrentHashMap do not block unless the entry is not found in the bucket or its value is null. In that case the map synchronizes on the bucket and looks for the entry again, in case it was put or removed right after the unsynchronized get. Removal operations do incur a bit of overhead: they require the chain of elements before and after the removed element to be cloned and rejoined without it. Since the value field of the inner Entry class is volatile, a thread already traversing the bucket from which a value is removed sees a null value when it reaches the removed element and knows to ignore it.

Different Hibernate object states and their lifecycle | Techartifact

There are four Hibernate object states:

1.) Persistent – Persistent objects and collections are short-lived, single-threaded objects that store the persistent state. They synchronize their state with the database depending on your flush strategy: auto-flush, where the state is written as soon as a setXXX() method is called or an item is removed from a Set, List, etc., or explicit synchronization points that you define with session.flush() or transaction.commit() calls. If you remove an item from a persistent collection such as a Set, it is removed from the database either immediately or when flush() or commit() is called, depending on your flush strategy. Persistent objects are plain old Java objects (POJOs) currently associated with a session. As soon as the associated session is closed, persistent objects become detached objects and are free to be used directly as data transfer objects in any application layer, such as the business layer or presentation layer.

2.) Detached – These objects and collections are instances of persistent objects that were associated with a session but are currently no longer associated with one. They can be freely used as data transfer objects without any impact on your database. Detached objects can later be attached to another session by calling methods like session.update() or session.saveOrUpdate(), at which point they become persistent objects again.

3.) Transient – These objects and collections are instances of persistent classes that have never been associated with a session. They can be freely used as data transfer objects without any impact on your database. Transient objects become persistent when associated with a session by calling session.save(), session.persist(), etc.

4.) Removed – A previously persistent object that has been deleted from the database, e.g. via session.delete(account). The Java instance may still exist, but Hibernate ignores it: any changes made to the object are no longer saved to the database. Hibernate does not null out the in-memory object; it is simply picked up by garbage collection once it falls out of scope.
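
Putting the four states together, here is a rough lifecycle sketch using the classic Session API. The Account entity and the configured SessionFactory are hypothetical; mapping annotations and error handling are omitted for brevity.

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;

// Hypothetical entity; @Entity/@Id mapping annotations omitted for brevity.
class Account {
    private String name;
    Account(String name) { this.name = name; }
    void setName(String name) { this.name = name; }
}

public class LifecycleDemo {
    static void demo(SessionFactory sessionFactory) {
        Account account = new Account("john");   // Transient: never seen by Hibernate

        Session session = sessionFactory.openSession();
        session.beginTransaction();
        session.save(account);                   // Persistent: associated with the session
        session.getTransaction().commit();
        session.close();                         // Detached: session closed, object lives on

        account.setName("johnny");               // edits made while detached are not tracked

        Session another = sessionFactory.openSession();
        another.beginTransaction();
        another.update(account);                 // Persistent again: reattached to a new session
        another.delete(account);                 // Removed: scheduled for deletion on flush
        another.getTransaction().commit();       // from here on Hibernate ignores the instance
        another.close();
    }
}
```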

Difference between HashSet and TreeSet in Java | Techartifact

HashSet:

– The class offers constant-time performance for the basic operations (add, remove, contains and size).
– It does not guarantee that the order of elements will remain constant over time.
– Iteration performance depends on the initial capacity and the load factor of the HashSet.
– It's quite safe to accept the default load factor, but you may want to specify an initial capacity of about twice the size to which you expect the set to grow (see the sketch after this list).
– The underlying data structure is a hash table (each HashSet is backed by a HashMap instance).
– Heterogeneous objects are allowed.
– Insertion order is not preserved; iteration order is based on the hash codes of the objects.
– null insertion is possible.
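
As a small sketch of the capacity advice above (the numbers are arbitrary):

```java
import java.util.HashSet;
import java.util.Set;

public class HashSetTuning {
    public static void main(String[] args) {
        // Expecting ~500 elements: size the set at about twice that so
        // it rarely has to rehash as it grows (default load factor 0.75).
        Set<String> names = new HashSet<>(1024);

        names.add("alice");
        names.add("bob");
        names.add(null);            // null is allowed in a HashSet
        names.add("alice");         // duplicate, silently ignored

        System.out.println(names.size());   // 3
    }
}
```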

TreeSet:

– The TreeSet class has a member reference variable of type NavigableMap. In fact, TreeSet makes use of the unique-key property of Maps to ensure there are no duplicate elements; a dummy value is stored against each key.
– The underlying data structure is a balanced tree (a red-black tree, via TreeMap).
– Guarantees log(n) time cost for the basic operations (add, remove and contains).
– Heterogeneous objects are not allowed by default.
– Insertion order is not preserved; all objects are stored according to some sorting order.
– With natural ordering, inserting null throws a NullPointerException. (In older JDKs the very first element could be null, since no comparison happened, but any later insertion would then fail.)
– Guarantees that the elements of the set will be sorted (in ascending natural order, or in the order you specify via the constructor).
– Doesn't offer any tuning parameters for iteration performance.
– Offers a few handy methods to deal with the ordered set, like first(), last(), headSet() and tailSet().
– You can define your own ordering rules by passing a Comparator to the constructor, or by having elements implement Comparable (as in the sketch below).
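
A short sketch of a Comparator-ordered TreeSet and its navigation methods:

```java
import java.util.TreeSet;

public class TreeSetDemo {
    public static void main(String[] args) {
        // Custom ordering via a Comparator passed to the constructor:
        // here, case-insensitive alphabetical order.
        TreeSet<String> words = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
        words.add("banana");
        words.add("Apple");
        words.add("cherry");

        System.out.println(words.first());            // Apple
        System.out.println(words.last());             // cherry
        System.out.println(words.headSet("banana"));  // [Apple] (strictly before "banana")
        System.out.println(words.tailSet("banana"));  // [banana, cherry] (from "banana" on)
    }
}
```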

The TreeSet implementation is useful when you need to extract elements from a collection in sorted order. It is generally faster to add elements to a HashSet and then convert the collection to a TreeSet for sorted traversal.
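
For instance (a toy sketch):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class SortedTraversal {
    public static void main(String[] args) {
        // Collect into a HashSet first (constant-time adds) ...
        Set<Integer> numbers = new HashSet<>();
        for (int i = 10; i > 0; i--) {
            numbers.add(i);
        }
        // ... then build a TreeSet from it once, for a sorted view.
        Set<Integer> sorted = new TreeSet<>(numbers);
        System.out.println(sorted);   // [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    }
}
```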

To optimize HashSet space usage, you can tune the initial capacity and load factor. TreeSet has no tuning options, as the tree is always kept balanced, ensuring log(n) performance for insertions, deletions and queries.