
Principles of MySQL Optimization

Use general principles and best practices to create your table schema.

Understand the structure and properties of different kinds of indexes, and choose a proper kind of index.

Create indexes according to the query statements in your application, creating as few indexes as possible, and make your indexes cover as many frequently selected columns as possible.

Use EXPLAIN to check how a query executes.

Use query profiling to determine which strategy is optimal, or which query is faster.

Common Strategies for MySQL Optimization

  • Schema
    • Choosing optimal data types.
    • Denormalization.
    • Cache and Summary Tables.
  • Index
    • Common index types such as B-Tree, Hash, and Full-text.
    • Multi-column indexes and covering indexes, prefix indexes and compressed indexes.
    • Avoid adding redundant and duplicate indexes, and remove unused indexes.
  • Query
    • Query tricks. For example, chopping up a query (or finding a difference set).
    • Join decomposition. Converting joins to single-table queries.
    • Using query optimizer hints.
    • Optimizing specific types of queries.
    • Doing complex operations (joins, nested subqueries, GROUP BY, ORDER BY) in application programs instead of in the database system.

This post talks about MySQL query optimization. If you are familiar with this topic, you can skip to the conclusion, which is a good summary of query optimization in MySQL and may give you some enlightenment.

Schema optimization and indexing are necessary for high performance, but they are not enough: you also need to design your queries well. If your queries are bad, even the best-designed schema and indexes will not perform well.

We need to understand deeply how MySQL really executes queries, so we can reason about what is efficient or inefficient, exploit MySQL's strengths, and avoid its weaknesses.

Why Are Queries Slow

Before trying to write fast queries, you need to know that what makes a query good or bad is its response time. Queries are tasks, but they are composed of subtasks, and those subtasks consume time. To optimize a query, you must optimize its subtasks by eliminating them, making them happen fewer times, or making them happen more quickly.

You can profile a query to find out what subtasks it performs and which ones are slow. In general, you can think of a query's lifetime by following the query through its sequence from the client to the server, where it is parsed, planned, and executed, and then back again to the client. Execution is one of the most important stages in a query's lifetime. It involves lots of calls to the storage engine to retrieve rows, as well as post-retrieval operations such as grouping and sorting.

While accomplishing all these tasks, the query spends time on the network, in the CPU, and in operations such as statistics gathering and planning, locking, and calls to the storage engine to retrieve rows.

In every case, excessive time may be consumed because the operations are performed needlessly, performed too many times, or are too slow. The goal of optimization is to avoid that, by eliminating or reducing operations, or making them faster. To do this, we need to understand a query's lifecycle and think in terms of where the time is consumed.

Slow Query Basics: Optimize Data Access

The most basic reason a query doesn't perform well is that it's working with too much data. We can use the following methods to find out whether a query accesses more data than it needs.

  • Find out whether your application is retrieving more data than you need. That means it’s accessing too many rows, or accessing too many columns.
  • Find out whether the MySQL server is analyzing more rows than it needs.

Are You Asking the Database for Data You Don’t Need

Some queries ask for more data than they need and then throw some of it away. This demands extra work of the MySQL server, adds network overhead, and consumes memory and CPU resources on the application server.

Common mistakes that cause fetching more data than needed:

  • Fetching more rows than needed.
  • Fetching all columns from a multi-table join.
  • Fetching all columns.
  • Fetching the same data repeatedly. You don’t need to fetch multiple times for the same data. You could cache it the first time you fetch it, and reuse it thereafter.

Is MySQL Examining Too Much Data

Once you are sure your queries retrieve only the data you need, you can look for queries that examine too much data while generating results. In MySQL, the simplest query cost metrics are:

  • Response time.
  • Number of rows examined.
  • Number of rows returned.

None of these metrics is a perfect way to measure query cost, but they reflect roughly how much data the server must access to execute a query and translate approximately into how fast the query runs. All three metrics are logged in the slow query log, so looking at the slow query log is one of the best ways to find queries that examine too much data.

Response time

Response time is the sum of two things: service time and queue time. Service time is how long it takes the server to actually process the query. Queue time is the portion of response time during which the server isn't really executing the query; it's waiting for something, such as I/O, a row lock, and so forth. As a result, response time is not consistent under varying load conditions. When you look at a query's response time, you should ask yourself whether the response time is reasonable for the query.

Rows examined and rows returned

It’s useful to think about the number of rows examined when analyzing queries, because you can see how efficiently the queries are finding data you need.

However, not all row accesses are equal. Shorter rows are faster to access, and fetching rows from memory is much faster than reading them from disk.

Ideally, the number of rows examined would be the same as the number returned, but in practice this is rarely possible. For example, when constructing rows with joins, the server must examine multiple rows to generate each row in the result.

Rows examined and access types

MySQL can use several access methods to find and return a row. Some require examining many rows, but others might produce the result without examining any.

The access methods appear in the type column in EXPLAIN’s output. The access types range from a full table scan to index scans, range scans, unique index lookups, and constants. Each of these is faster than the one before it, because it requires reading less data.

If you aren’t getting a good access type, the best way to solve the problem is usually by adding an appropriate index.

In general, MySQL can apply a WHERE clause in three ways, from best to worst:

  • Apply the conditions to the index lookup operation to eliminate nonmatching rows. This happens at the storage engine layer.
  • Use a covering index (“Using index” in the Extra column of EXPLAIN) to avoid row accesses, and filter out nonmatching rows after retrieving each result from the index. This happens at the server layer, but it doesn't require reading rows from the table.
  • Retrieve rows from the table, then filter nonmatching rows (“Using where” in the Extra column of EXPLAIN). This happens at the server layer and requires the server to read rows from the table before it can filter them.

Good indexes help your queries get a good access type and examine only the rows they need. However, adding an index doesn't always mean that MySQL will access fewer rows, or access and return the same number of rows. For example, a query that uses the COUNT() aggregate function can examine many rows while returning only one.

If you find that a huge number of rows were examined to produce relatively few rows in the result, you can try some more sophisticated fixes:

  • Use covering indexes.
  • Change the schema.
  • Rewrite a complicated query.

Ways to Restructure Queries

As you optimize problematic queries, your goal should be to find alternative ways to get the result you want. You can sometimes transform queries into equivalent forms that return the same results, and get better performance.

Complex Queries Versus Many Queries

It’s a good idea to use as few queries as possible, but sometimes you can make a query more efficient by decomposing it and executing a few simple queries instead of one complex one.

Chopping Up a Query

Another way to slice up a query is to divide and conquer, keeping it essentially the same but running it in smaller “chunks” that affect fewer rows each time.

Purging old data is a great example. Deleting all the old data in one massive query, like the following, could lock a lot of rows for a long time:

DELETE FROM messages WHERE created < DATE_SUB(NOW(), INTERVAL 3 MONTH);
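You can chop it up into smaller statements instead. A minimal sketch (the chunk size of 10,000 is an arbitrary choice; the application repeats the statement until it affects zero rows):

-- Delete in chunks; rerun until no more rows are affected.
DELETE FROM messages
WHERE created < DATE_SUB(NOW(), INTERVAL 3 MONTH)
LIMIT 10000;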

It might also be a good idea to add some sleep time between the DELETE statements to spread the load over time and reduce the amount of time locks are held.

Join Decomposition

Many high-performance applications use join decomposition. You can decompose a join by running multiple single-table queries instead of a multitable join. For example:

SELECT * FROM tag
JOIN tag_post ON tag_post.tag_id=tag.id
JOIN post ON tag_post.post_id=post.id
WHERE tag.tag='mysql';

Can be replaced with these single-table queries:

SELECT * FROM tag WHERE tag='mysql';
SELECT * FROM tag_post WHERE tag_id=1234;
SELECT * FROM post WHERE post.id in (123,456,567,8876);

Advantages of multiple single-table queries

  • Caching can be more efficient.
  • Executing the queries individually can sometimes reduce lock contention.
  • The queries themselves can be more efficient.
  • You can reduce redundant row accesses.

Query Execution Basics

If you need to get high performance from your MySQL server, one of the best ways is to learn how MySQL optimizes and executes queries.

The process MySQL follows to execute a query:

  1. The client sends the SQL statement to the server.
  2. The server checks the query cache. If there's a hit, it returns the stored result from the cache; otherwise, it proceeds to the next step.
  3. The server parses, preprocesses, and optimizes the SQL into a query execution plan.
  4. The query execution engine executes the plan by making calls to the storage engine API.
  5. The server sends the result to the client.

The MySQL Client/Server Protocol

The protocol is half-duplex, which means that at any given time the MySQL server can be either sending or receiving messages, but not both. This protocol makes MySQL communication simple and fast, but it has some limitations. For example, once one side sends a message, the other side must fetch the entire message before responding.

The client sends a query to the server as a single packet of data, so the max_allowed_packet configuration variable is important if you have large queries.

In contrast, the response from the server usually consists of many packets of data. When the server responds, the client has to receive the entire result set. Most libraries that connect to MySQL let you either fetch the whole result and buffer it in memory, or fetch each row as you need it.

Query States

Each MySQL connection, or thread, has a state that shows what it is doing at any given time. There are several ways to view these states, but the easiest is to use the SHOW FULL PROCESSLIST command. The basic states in MySQL:

  • Sleep
  • Query
  • Locked
  • Analyzing and statistics
  • Copying to tmp table [on disk]
  • Sorting result
  • Sending data

The Query Cache

Before parsing a query, MySQL checks for it in the query cache, if the cache is enabled. This operation is a case-sensitive hash lookup. If the query differs from a similar query in the cache by even a single byte, it won’t match.

If MySQL does find a match in the query cache, it must check privileges before returning the cached result. If the privileges are OK, MySQL retrieves the stored result from the query cache and sends it to the client. The query is never parsed, optimized, or executed.

The Query Optimization Process

After doesn’t find a match in the query cache, the next step in the query lifecycle turns a SQL query into an execution plan for the query execute engine. It has several substeps: parsing, preprocessing, and optimization. Errors (like syntax errors) can be raised at any point in the process.

The parser and the preprocessor

  • query parsing: MySQL’s parser breaks the query into tokens and builds a parse tree from them. The parser uses MySQL’s SQL grammar to interpret and validate the query.
  • query preprocessing: the preprocessor checks the resulting parse tree for additional semantics. For example, it checks that tables and columns exist, and it resolves names and aliases to ensure that column references aren’t ambiguous. Next, the preprocessor checks privileges.

The query optimizer

After preprocessing, the parse tree is valid and ready for the optimizer to turn it into a query execution plan. A query can often be executed in many different ways that all produce the same result. The optimizer's job is to find the best option.

MySQL uses a cost-based optimizer, which means it tries to predict the cost of various execution plans and choose the least expensive. The unit of cost was originally a single random 4KB data page read, but it now also includes factors such as the estimated cost of executing a WHERE clause comparison. You can see how expensive the optimizer estimated a query to be by running the query and then inspecting the Last_query_cost session variable:

SELECT SQL_NO_CACHE COUNT(*) FROM your_table;
SHOW STATUS LIKE 'Last_query_cost';

The result of SHOW STATUS LIKE 'Last_query_cost' means that the optimizer estimated it would need to do about that many random data page reads to execute the query.

The optimizer might not always choose the best plan, for many reasons:

  • The statistics could be wrong.
  • The cost metric is not exactly equivalent to the true cost of running the query.
  • MySQL’s idea of optimal might not match yours. MySQL doesn’t really try to make queries fast; it tries to minimize their.
  • MySQL doesn’t consider other queries that are running concurrently.
  • MySQL doesn’t always do cost-based optimization.
  • The optimizer doesn’t take into account the cost of operations not under its control, such as executing stored functions.

MySQL’s join execution strategy

MySQL considers every query a join: not just every query that matches rows from two tables, but every query. It's very important to understand how MySQL executes joins.

MySQL’s join execution strategy is simple: it treats every join as a nested-loop join. This means MySQL runs a loop to find a row from a table, then run a nested loop to find a matching row in the next table. It continues until it has found a matching row in each table in the join. For example:

SELECT tbl1.col1, tbl2.col2
FROM tbl1 INNER JOIN tbl2 USING(col3)
WHERE tbl1.col1 IN (5, 6);

The following pseudocode shows how MySQL might execute the query:

outer_iter = iterator over tbl1 where col1 IN(5,6)
outer_row = outer_iter.next
while outer_row
    inner_iter = iterator over tbl2 where col3 = outer_row.col3
    inner_row = inner_iter.next
    while inner_row
        output [ outer_row.col1, inner_row.col2 ]
        inner_row = inner_iter.next
    end
    outer_row = outer_iter.next
end

In other words, MySQL first builds an iterator over the rows of the first table that satisfy the WHERE condition, and then for each of those rows looks up rows with equal values on the join columns in the next table.

The execution plan

MySQL doesn’t generate byte-code to execute a query, instead, the query execution plan is actually a tree of instructions that the query execution engine follows to produce the query results.

Any multitable query can conceptually be represented as a tree. MySQL always begins with one table and finds matching rows in the next table. Thus, MySQL's query execution plans always take the form of a left-deep tree.

The join optimizer

The most important part of the MySQL query optimizer is the join optimizer, which decides the best order of execution for multitable queries.

MySQL’s join optimizer can reorder queries to make them less expensive to execute. You can use STRAIGHT_JOIN and write queries in the order you think is best, but such times are rare. In most cases, the join optimizer will outperform a human.

Sort optimizations

Sorting results can be a costly operation, so you can often improve performance by avoiding sorts or by performing them on fewer rows.

When MySQL can’t use an index to produce a sorted result, it must sort the rows itself. It can do this in memory or on disk, but it always calls this process a filesort, even if it doesn’t actually use a file.

If the values to be sorted fit into the sort buffer, MySQL can perform the sort entirely in memory with a quicksort. If they don't fit, MySQL performs the sort on disk by sorting the values in chunks with quicksort and then merging the sorted chunks into the final result.

When sorting a join, if the ORDER BY clause refers only to columns from the first table in the join order, MySQL can filesort this table and then proceed with the join. This case shows “Using filesort” in the Extra column of EXPLAIN. If the ORDER BY clause contains columns from more than one table, MySQL must store the query's results in a temporary table and then filesort the temporary table after the join finishes. This case shows “Using temporary; Using filesort” in the Extra column of EXPLAIN.

The Query Execution Engine

The parsing and optimizing stage outputs a query execution plan, which MySQL's query execution engine uses to process the query. The plan is a data structure; it is not executable byte-code, which is how many other databases execute queries.

In contrast to the optimization stage, the execution stage is usually not all that complex: MySQL simply follows the instructions given in the query execution plan. Many of the operations in the plan invoke methods implemented by the storage engine interface, also known as the handler API.

To execute the query, the server just repeats the instructions until there are no more rows to examine.

Returning Results to the Client

The final step in executing a query is to reply to the client. Even queries that don’t return a result set still reply to the client connection with information about the query, such as how many rows it affected.

The server generates and sends results incrementally. As soon as MySQL processes the last table and generates one row successfully, it can and should send that row to the client. This has two benefits: it lets the server avoid holding the row in memory, and it means the client starts getting the results as soon as possible. Each row in the result set is sent in a separate packet in the MySQL client/server protocol, although protocol packets can be buffered and sent together at the TCP protocol layer.

Limitations of the MySQL Query Optimizer

Correlated Subqueries

MySQL sometimes optimizes subqueries very badly. The worst offenders are IN() subqueries in the WHERE clause.

The standard advice is to rewrite such a subquery as a join; for a NOT IN subquery, write it as a LEFT OUTER JOIN instead.
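For example, a sketch using the sakila sample schema to find films that have no actors:

-- Subquery form: MySQL 5.x may execute this as a correlated subquery,
-- probing film_actor once for every row in film.
SELECT * FROM sakila.film
WHERE film_id NOT IN (SELECT film_id FROM sakila.film_actor);

-- Rewritten as a LEFT OUTER JOIN:
SELECT film.*
FROM sakila.film
LEFT OUTER JOIN sakila.film_actor USING(film_id)
WHERE film_actor.film_id IS NULL;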

UNION Limitations

You should put condition clauses inside each part of the UNION.

For example, if you UNION together two tables and LIMIT the result to the first 20 rows:

Query 1:

This query stores all the rows of both tables in a temporary table, then fetches the first 20 rows from that temporary table.

(SELECT first_name, last_name
FROM sakila.actor
ORDER BY last_name)
UNION ALL
(SELECT first_name, last_name
FROM sakila.customer
ORDER BY last_name)
LIMIT 20;

Query 2:

This query stores only the first 20 rows from each table in a temporary table, then fetches the first 20 rows from that temporary table.

(SELECT first_name, last_name
FROM sakila.actor
ORDER BY last_name
LIMIT 20)
UNION ALL
(SELECT first_name, last_name
FROM sakila.customer
ORDER BY last_name
LIMIT 20)
LIMIT 20;

You should use query 2 (with a LIMIT inside each part) instead of query 1 (with only the outer LIMIT). The outer LIMIT is still needed, because the temporary table now holds up to 40 rows.

SELECT and UPDATE on the Same Table

MySQL doesn’t let you SELECT from a table while simultaneously running an UPDATE on it. To work around this limitation, you can use a derived table, because MySQL materializes it as a temporary table.

UPDATE tb1 AS outer_tb1
SET cnt = (
    SELECT COUNT(*) FROM tb1 AS inner_tb1
    WHERE inner_tb1.type = outer_tb1.type
);

Can be rewritten as:

UPDATE tb1
INNER JOIN (
    SELECT type, COUNT(*) AS cnt
    FROM tb1
    GROUP BY type
) AS der USING(type)
SET tb1.cnt = der.cnt;

Query Optimizer Hints

MySQL has a few optimizer hints you can use to control the query plan if you are not content with the one MySQL’s optimizer chooses. You place the appropriate hint in the query whose plan you want to modify, and it is effective for only that query. Some of them are version-dependent. You can check the MySQL manual for the exact syntax of each hint.

STRAIGHT_JOIN. Placed after the SELECT keyword, this hint forces all tables in the query to be joined in the order in which they are listed in the statement. Placed between two joined tables, it forces a join order on those two tables. It is useful when MySQL doesn't choose a good join order, or when the optimizer takes a long time to decide on a join order.

select a.id, c.id
from a
straight_join b on b.a_id = a.id
join c on c.id = b.c_id;

or

select straight_join a.id, c.id
from a
join b on b.a_id = a.id
join c on c.id = b.c_id;

USE INDEX, IGNORE INDEX, and FORCE INDEX. These hints tell the optimizer which indexes to use or ignore for finding rows in a table.

Optimizing Specific Types of Queries

Here is some advice on how to optimize certain kinds of queries. Most of the advice is version-dependent, and it might not hold for future versions of MySQL.

Optimizing COUNT() Queries

Before optimization, it’s important that you understand what COUNT() really does.

COUNT(expr) counts how many times the expression has a value, that is, how many times it is not NULL. When you want to know the number of rows in the result, you should always use COUNT(*).
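For example, with the items table used in Example 2 below:

-- COUNT(color) counts rows where color IS NOT NULL;
-- COUNT(*) counts all rows, NULLs included.
SELECT COUNT(color), COUNT(*) FROM items;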

Simple Optimizations

Example 1:

SELECT COUNT(*) FROM world.City WHERE ID > 5;

Replace with the following, which counts the few rows with ID <= 5 and subtracts them from the total row count (which MySQL can often compute very quickly):

SELECT (SELECT COUNT(*) FROM world.City) - COUNT(*) 
FROM world.City WHERE ID <= 5;

Example 2:

SELECT SUM(IF(color = 'blue', 1, 0)) AS blue,
       SUM(IF(color = 'red', 1, 0)) AS red
FROM items;

Replace with:

SELECT COUNT(color = 'blue' OR NULL) AS blue,
       COUNT(color = 'red' OR NULL) AS red
FROM items;

Using an approximation

Sometimes you don’t need an accurate count, so you can just use an approximation. The optimizer’s estimated rows in EXPLAIN often serves well for this. Just execute an EXPLAIN query instead of the real query.

EXPLAIN select COUNT(*) from your_table;

Fast, accurate, and simple: pick any two.

Optimizing JOIN Queries

There are some principles for optimizing JOIN queries:

  • Make sure there are indexes on the columns in the ON or USING clauses. Consider the join order when adding indexes. In general, you need to add indexes only on the second table in the join order (for example, when joining table A to table B in that order, you need an index on B rather than on A), unless they're needed for some other reason. A sketch follows this list.
  • Try to ensure that any GROUP BY or ORDER BY expression refers only to columns from a single table, so MySQL can try to use an index for that operation.
  • Be careful when upgrading MySQL, because the join syntax, operator precedence, and other behaviors have changed at various times.
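A minimal sketch with hypothetical tables a and b: MySQL reads a first and probes b for matches, so the join column on b is the one that needs an index.

-- Hypothetical schema; only b's join column needs an index here.
ALTER TABLE b ADD INDEX idx_a_id (a_id);

SELECT a.id, b.id
FROM a
JOIN b ON b.a_id = a.id
WHERE a.id IN (5, 6);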

Optimizing Subqueries

The core of this post focuses on MySQL 5.1 and MySQL 5.5. In these versions of MySQL, and in most situations, you should usually prefer a join where possible, rather than a subquery. However, “prefer a join” is not future-proof advice, so you need to be cautious with it.

Optimizing GROUP BY and DISTINCT

MySQL optimizes these two kinds of queries similarly in many cases, and in fact converts between them as needed internally during the optimization process.

MySQL has two kinds of GROUP BY strategies when it can't use an index: it can use a temporary table or a filesort to perform the grouping. Either one can be more efficient for any given query. You can force the optimizer to choose one method or the other with the SQL_BIG_RESULT and SQL_SMALL_RESULT optimizer hints.

If you need to group a join by a value that comes from a lookup table, it is usually more efficient to group by the lookup table's identifier than by the value itself.

SELECT actor.first_name, actor.last_name, COUNT(*)
FROM sakila.film_actor
INNER JOIN sakila.actor USING(actor_id)
GROUP BY actor.first_name, actor.last_name;

Replace with:

SELECT actor.first_name, actor.last_name, COUNT(*)
FROM sakila.film_actor
INNER JOIN sakila.actor USING(actor_id)
GROUP BY film_actor.actor_id;

It’s generally a bad idea to select nongrouped columns in a grouped query, because the results will be nondeterministic and could easily change if you change an index or the optimizer decides to use a different strategy. It’s better to be explicit. We suggest you set the server’s SQL_MODE configuration variable to include ONLY_FULL_GROUP_BY so it produces an error instead of letting you write a bad query. Query select values only are grouped columns or aggregate function on grouped columns. For example:

SELECT name, COUNT(*) FROM your_table GROUP BY name;

MySQL automatically orders grouped queries by the columns in the GROUP BY clause, unless you specify an ORDER BY clause explicitly. If you don’t care about the order and you see this causing a filesort, you can use ORDER BY NULL to skip the automatic sort.
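For example, a quick sketch with sakila:

-- Skip the implicit sort of the grouped result:
SELECT actor_id, COUNT(*) FROM sakila.film_actor
GROUP BY actor_id
ORDER BY NULL;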

Optimizing GROUP BY WITH ROLLUP

A variation on grouped queries is to ask MySQL to do superaggregation within the results. You can do this with a WITH ROLLUP clause, but it might not be as well optimized as you need. Check the execution method with EXPLAIN, paying attention to whether the grouping is done via filesort or temporary table; try removing the WITH ROLLUP and seeing if you get the same grouping method.

Sometimes it’s more efficient to do superaggregation in your application, even if it means fetching many more rows from the server.

Optimizing LIMIT and OFFSET

Queries with LIMITs and OFFSETs are common in systems that do pagination, nearly always in conjunction with an ORDER BY clause. It’s helpful to have an index that supports the ordering; otherwise, the server has to do a lot of filesorts.

A frequent problem is a high offset value. For example, if your query looks like LIMIT 10000, 20, it generates 10,020 rows and throws away the first 10,000 of them, which is very expensive. To optimize it, you can either limit how many pages are permitted in a pagination view, or try to make the high offsets more efficient.

The key to optimizing pagination is to use an index for sorting rather than a filesort. You can use a join or other techniques to replace the OFFSET.

  • Using an index for sorting
    • Using a join. SELECT * FROM ... JOIN (SELECT id ... ORDER BY ... LIMIT off, count)
    • Removing the offset. SELECT * ... ORDER BY ... LIMIT count
    • Removing the offset and limit. SELECT * ... WHERE <column> BETWEEN <start> AND <end> ORDER BY ...
  • Using a filesort
    • SELECT * ... ORDER BY ... LIMIT offset, row_count

  1. Optimizing sorts by using joins

One way to optimize a high offset is to apply the offset to a covering index, rather than to the full rows. You can then join the result to the full rows and retrieve the additional columns you need. For example:

SELECT film_id, description FROM sakila.film ORDER BY title LIMIT 50, 5;

Replace with:

SELECT film.film_id, film.description
FROM sakila.film
INNER JOIN (
    SELECT film_id FROM sakila.film
    ORDER BY title LIMIT 50, 5
) AS lim USING(film_id);

SELECT id FROM your_table ORDER BY title can use a covering index on (title, id). However, SELECT * ... ORDER BY title generally has no suitable index to use, because tables rarely have an index covering all columns, so MySQL has to use a filesort.

  2. Optimizing offsets by using positional range queries

Convert the limit to a positional query, which the server can execute as an index range scan. This assumes a position column that records each row's position in the desired order. For example:

SELECT film_id, description FROM sakila.film
WHERE position BETWEEN 50 AND 54 ORDER BY position;

  3. Using a start position instead of an OFFSET

If you use a sort of bookmark to remember the position of the last row you fetched, you can generate the next set of rows by starting from that position instead of using an OFFSET.

First page query:

SELECT * FROM sakila.rental
ORDER BY rental_id ASC LIMIT 20;

Next page query:

SELECT * FROM sakila.rental
WHERE rental_id > 20
ORDER BY rental_id ASC LIMIT 20;

Optimizing UNION

MySQL always executes UNION queries by creating a temporary table and filling it with the UNION results, and it can't apply as many optimizations to UNION queries as you might be used to. Because the temporary table has no usable indexes, you might have to help the optimizer by manually pushing down WHERE, LIMIT, ORDER BY, and other conditions, copying them, as appropriate, from the outer query into each SELECT in the UNION.

It’s important to always use UNION ALL, unless you need the server to eliminate duplicate rows. If you omit the ALL keyword, MySQL adds the distinct option to the temporary table, which uses the full row to determine uniqueness. This is quite expensive.

Static Query Analysis

Percona Toolkit contains pt-query-advisor, a tool that parses a log of queries, analyzes the query patterns, and gives annoyingly detailed advice about potentially bad practices in them. It will catch many common problems such as those we’ve mentioned in the previous sections of this post.

Conclusion

For query optimization, we should understand query execution basics, such as the general query execution process in MySQL.

Before we optimize a query, we must find out why it is slow. We can use EXPLAIN to see how the query executes. A slow query may be caused by many factors: the table schema, the table's indexes, and the query statement itself. We need to consider all these aspects together when optimizing.

Query optimization is only one part of overall optimization. In theory, MySQL will execute some queries almost identically; in reality, measuring is the only way to tell which approach is really faster. There is advice on how to optimize certain kinds of queries, but most of it is version-dependent and might not hold for future versions of MySQL.

Appendixes

Common Types of Query

  • Single Table Query
  • JOIN
  • UNION, UNION ALL
  • Subquery

Efficiency of Queries

Efficiency of different kinds of single-table queries, from high to low:

  • Extra: Using index. Using a covering index (look up in an index, fetch data from the index file).
  • Extra: (not Using index). The data is not fetched from the index file alone.
    • type: ref (or type: range, Extra: Using index condition). Using an index to find all columns of rows (look up in an index, fetch data from the table file).
    • type: index_merge. Using multiple indexes to find all columns of rows (look up in multiple indexes, fetch data from the table file). For example, select * from <table_name> where <column in index1> = <value1> or <column in index2> = <value2>.
    • type: index. Scanning an index (scan the index, fetch data from the table file).
    • type: range, Extra: Using where. Using an index to find all columns of rows, with the MySQL server applying a WHERE filter after the storage engine returns the rows (look up in an index, fetch data from the table file, and filter rows in the server). For example, SELECT * FROM <table> WHERE id > 5 AND id <> 1.
    • type: ALL. Scanning the table file.
      • type: ALL, Extra: Using where. Scan the table. For example, SELECT * FROM <table> WHERE <column not in index> > "".
      • type: ALL, Extra: Using where, Using filesort. Scan the table and do a filesort. For example, SELECT * FROM <table> WHERE <column not in index> > "" order by <column not in index>.

Internal Temporary Table

In some cases, the server creates internal temporary tables while processing statements. To determine whether a statement requires a temporary table, use EXPLAIN and check whether the Extra column says Using temporary. The server uses internal temporary tables in cases such as:

  • Evaluation of UNION statements that use UNION DISTINCT or have a global ORDER BY clause.
  • Evaluation of derived tables.
  • Evaluation of common table expressions (WITH).
  • Evaluation of statements that contain an ORDER BY clause and a different GROUP BY clause, or for which the ORDER BY or GROUP BY contains columns from tables other than the first table in the join queue.
  • Evaluation of subqueries that have no indexes.
  • Evaluation of DISTINCT combined with ORDER BY may require a temporary table.
  • For queries that use the SQL_SMALL_RESULT modifier, MySQL uses an in-memory temporary table, unless the query also contains elements that require on-disk storage.
  • To evaluate INSERT … SELECT statements that select from and insert into the same table.
  • Evaluation of multiple-table UPDATE statements.
  • Evaluation of window functions uses temporary tables as necessary.

Some query conditions prevent the use of an in-memory temporary table, in which case the server uses an on-disk table instead:

  • Presence of a BLOB or TEXT column in the table.
  • Presence of any string column with a maximum length larger than 512 bytes or characters in SELECT.
  • The SHOW COLUMNS and DESCRIBE statements use BLOB as the type for some columns.

Format of SELECT Statement

SELECT
    [ALL | DISTINCT | DISTINCTROW ]
    [HIGH_PRIORITY]
    [STRAIGHT_JOIN]
    [SQL_SMALL_RESULT] [SQL_BIG_RESULT] [SQL_BUFFER_RESULT]
    [SQL_NO_CACHE] [SQL_CALC_FOUND_ROWS]
    select_expr [, select_expr] ...
    [into_option]
    [FROM table_references
      [PARTITION partition_list]]
    [WHERE where_condition]
    [GROUP BY {col_name | expr | position}, ... [WITH ROLLUP]]
    [HAVING where_condition]
    [WINDOW window_name AS (window_spec)
      [, window_name AS (window_spec)] ...]
    [ORDER BY {col_name | expr | position}
      [ASC | DESC], ... [WITH ROLLUP]]
    [LIMIT {[offset,] row_count | row_count OFFSET offset}]
    [into_option]
    [FOR {UPDATE | SHARE}
        [OF tbl_name [, tbl_name] ...]
        [NOWAIT | SKIP LOCKED]
      | LOCK IN SHARE MODE]
    [into_option]

into_option: {
    INTO OUTFILE 'file_name'
        [CHARACTER SET charset_name]
        export_options
  | INTO DUMPFILE 'file_name'
  | INTO var_name [, var_name] ...
}

References

[1] High Performance MySQL by Baron Schwartz, Vadim Tkachenko, Peter Zaitsev, Derek J. Balling

[2] MySQL 8.0 Reference Manual

This post talks about MySQL indexes. If you are familiar with this topic, you can skip to the conclusion, which is a good summary of indexing in MySQL and may give you some enlightenment.

Indexes (also called keys in MySQL) are data structures that storage engines use to find rows quickly.

Indexes are critical for good performance, and become more important as your data grows larger. Small, lightly loaded databases often perform well even without proper indexes, but as the dataset grows, performance can drop very quickly.

Index optimization is perhaps the most powerful way to improve query performance. Indexes can improve performance by many orders of magnitude, and optimal indexes can sometimes boost performance about two orders of magnitude more than indexes that are merely good. Creating truly optimal indexes will often require you to rewrite queries.

Indexing Basics

An index contains values from one or more columns in a table. If you index more than one column, the column order is very important, because MySQL can only search efficiently on a leftmost prefix of the index. Creating an index on two columns is not the same as creating two separate single-column indexes.

Types of Indexes

There are many types of indexes, each designed to perform well for different purposes. Indexes are implemented in the storage engine layer, not the server layer. Thus, they are not standardized: indexing works slightly differently in each engine, and not all engines support all types of indexes.

B-Tree Indexes

When people talk about an index without mentioning a type, they are probably referring to a B-Tree index, which uses a B-Tree data structure to store its data. Most of MySQL’s storage engines support this index type.

Storage engines use B-Tree indexes in various ways, which can affect performance. For example, MyISAM uses a prefix compression technique that makes indexes smaller, but InnoDB leaves values uncompressed in its indexes. Also, MyISAM indexes refer to the indexed rows by their physical storage locations, but InnoDB refers to them by their primary key values. Each variation has benefits and drawbacks.

A B-Tree index speeds up data access because the storage engine doesn't have to scan the whole table to find the desired data. Instead, it can find a specific row from the index file in O(log n) time.

Limitations of B-Tree indexes:

  • They are not useful if the lookup does not start from the leftmost side of the indexed columns.
  • You can’t skip columns in the index. For example, an index of your table is key(last_name, first_name, birth_date), if you skip first_name, MySQL can use only the first column of the index, whether the birth_date appears in the where clause.
  • The storage engine can’t optimize accesses with any columns to the right of the first range condition. For example, your index is key(last_name, first_name, birth_date), and your query is WHERE last_name="xx" AND first_name LIKE "xx%" AND birth_date="xxx", the index access will use only the first two columns in the index, because the LIKE is a range condition.

Column order in an index is very important, because these limitations of B-Tree indexes are all related to column ordering. Some of the limitations are not inherent to B-Tree indexes, but are a result of how the MySQL query optimizer and storage engines use indexes, and some of them might be removed in the future.

Hash Indexes

A hash index is built on a hash table and is useful only for exact lookups that use every column in the index. For each row, the storage engine computes a hash code of the indexed columns, which is a small value that will probably differ from the hash codes computed for other rows with different key values. It stores the hash codes in the index and stores a pointer to each row in a hash table.

Because the indexes themselves store only short hash values, hash indexes are very compact. As a result, lookups are usually lightning fast. However, hash indexes have some limitations:

  • Because the index contains only hash codes and row pointers rather than the values themselves, MySQL can't use the values in the index to avoid reading the rows. Fortunately, accessing the in-memory rows is very fast.
  • MySQL can't use hash indexes for sorting because they don't store rows in sorted order.
  • Hash indexes don't support partial key matching, because they compute the hash from the entire indexed value. That is, if you have an index on (A, B) and your query's WHERE clause refers only to A, the index won't help.
  • Hash indexes support only equality comparisons that use the =, IN(), and <=> operators (note that <> and <=> are not the same operator). They can't speed up range queries.
  • Accessing data in a hash index is very quick, unless there are many collisions (multiple values with the same hash). When there are collisions, the storage engine must follow each row pointer in the linked list and compare the values to find the right rows.
  • Some index maintenance operations (deleting or adding rows) can be slow if there are many hash collisions.

These limitations make hash indexes useful only in special cases. However, when they match the application’s needs, they can improve performance dramatically.

If your storage engine doesn't support hash indexes, you can emulate them yourself in a manner similar to the one InnoDB uses.
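A sketch of that emulation from High Performance MySQL: add an indexed integer column holding a hash of the long value (the table and column names here are illustrative), and always repeat the original value in the WHERE clause to guard against hash collisions.

CREATE TABLE pseudohash (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
    url     VARCHAR(255) NOT NULL,
    url_crc INT UNSIGNED NOT NULL DEFAULT 0,
    PRIMARY KEY (id),
    KEY (url_crc)
);

-- Maintain url_crc on writes, e.g. with BEFORE INSERT/UPDATE triggers
-- that do SET NEW.url_crc = CRC32(NEW.url).

-- Look up by the small hash column first, then verify the real value:
SELECT id FROM pseudohash
WHERE url_crc = CRC32('http://www.mysql.com')
  AND url = 'http://www.mysql.com';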

Full-text Indexes

FULLTEXT is a special type of index that finds keywords in the text instead of comparing values directly to the values in the index. Full-text searching is completely different from other types of matching. It has many subtleties, such as stopwords, stemming and plurals, and Boolean searching. It is much more analogous to what a search engine does than to simple WHERE parameter matching.

Having a full-text index on a column does not eliminate the value of a B-Tree index on the same column. Full-text indexes are for MATCH AGAINST operations, not ordinary WHERE clause operations.

In standard MySQL, only some storage engines support full-text indexing, and there are third-party full-text search engines, such as Sphinx, Lucene, Solr, Groonga, and so on.

Benefits of Indexes

  • Indexes reduce the amount of data the server has to examine.
  • Indexes help the server avoid sorting and temporary tables.
  • Indexes turn random I/O into sequential I/O.

Index Strategies

Creating the correct indexes and using them properly is essential to good query performance. There are many ways to choose and use indexes effectively, because there are many special-case optimizations and specialized behaviors. We need to understand how to use indexes effectively.

Isolating the column

Some queries defeat indexes or prevent MySQL from using the available indexes. MySQL generally can't use an index on a column unless the column is isolated in the query. Isolating the column means it should not be part of an expression or be inside a function in the query. Some examples of queries that fail to isolate the column:

-- bad query examples
mysql> SELECT actor_id FROM sakila.actor WHERE actor_id + 1 = 5;
mysql> SELECT ... WHERE TO_DAYS(CURRENT_DATE) - TO_DAYS(date_col) <= 10;
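These can be rewritten with the column isolated on one side of the comparison (a sketch):

-- equivalent queries that can use an index on the column
mysql> SELECT actor_id FROM sakila.actor WHERE actor_id = 4;
mysql> SELECT ... WHERE date_col >= DATE_SUB(CURRENT_DATE, INTERVAL 10 DAY);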

Prefix Indexes and Index Selectivity

Sometimes you need to index very long character columns, which makes your indexes large and slow. One strategy is to simulate a hash index, as described earlier in this post.

Another way is a prefix index. You can often save space and get good performance by indexing the first few characters instead of the whole value. This makes your indexes use less space, but it also makes them less selective. Index selectivity is the ratio of the number of distinct indexed values to the total number of rows in the table, and it ranges from 1/#rows to 1. A highly selective index is good because it lets MySQL filter out more rows when it looks for matches.

A prefix of the column is often selective enough to give good performance. If you are indexing BLOB or TEXT columns, or very long VARCHAR columns, you must define prefix indexes, because MySQL disallows indexing their full length. To create a prefix index:

CREATE INDEX index_name
ON table_name(column_name(length));

Ways to calculate a good prefix length

  • Find the most frequent values and compare that list to a list of the most frequent prefixes.

  • Compute the full column's selectivity, and try to make the prefix's selectivity close to that value.

    -- full column selectivity
    mysql> SELECT COUNT(DISTINCT city)/COUNT(*) FROM sakila.city_demo;
    -- prefix selectivity at several lengths
    mysql> SELECT COUNT(DISTINCT LEFT(city, 3))/COUNT(*) AS sel3,
    -> COUNT(DISTINCT LEFT(city, 4))/COUNT(*) AS sel4,
    -> COUNT(DISTINCT LEFT(city, 5))/COUNT(*) AS sel5,
    -> COUNT(DISTINCT LEFT(city, 6))/COUNT(*) AS sel6
    -> FROM sakila.city_demo;

Prefix indexes can be a great way to make indexes smaller and faster, but they have downsides too: MySQL cannot use prefix indexes for ORDER BY or GROUP BY queries, nor can it use them as covering indexes.

A common case found to benefit from prefix indexes is when long hexadecimal identifiers are used. Adding an index on the first eight characters or so often boosts performance significantly, in a way that’s completely transparent to the application.

Multicolumn Indexes

Multicolumn indexes are often very poorly understood. Common mistakes are to index many or all of the columns separately, or to index columns in the wrong order.

Individual indexes on lots of columns won't help MySQL improve performance for most queries. Earlier versions of MySQL could use only a single index per table in a query, so when no single index was good enough to help, MySQL often chose a table scan. MySQL 5.0 and newer can cope a little better with such poorly indexed tables by using a strategy known as index merge, which permits a query to make limited use of multiple indexes from a single table to locate desired rows.

For example, the film_actor table has an index on film_id and an index on actor_id, but neither is a good choice for both WHERE conditions in this query:

mysql> SELECT film_id, actor_id FROM sakila.film_actor
-> WHERE actor_id = 1 OR film_id = 1;

In older MySQL versions, that query would produce a table scan unless you wrote it as a UNION of two queries:

mysql> SELECT film_id, actor_id FROM sakila.film_actor WHERE actor_id = 1
-> UNION ALL
-> SELECT film_id, actor_id FROM sakila.film_actor WHERE film_id = 1
-> AND actor_id <> 1;

In MySQL 5.0 and newer, the query can use both indexes, scanning them simultaneously and merging the result.

The index merge strategy sometimes works very well, but it’s more common for it to actually be an indication of a poorly indexed table.

When you see an index merge in EXPLAIN, you should examine the query and table structure to see if this is really the best you can get.

Choosing a Good Column Order

The correct order depends on the queries that will use the index, and you must think about how to choose the index order such that rows are sorted and grouped in a way that will benefit the query.

The order of columns in a multicolumn B-Tree index means that the index is sorted first by the leftmost column, then by the next column, and so on. Therefore, the index can be scanned in either forward or reverse order, to satisfy queries with ORDER BY, GROUP BY, and DISTINCT clauses that match the column order exactly.

There is a rule of thumb for choosing column order: place the most selective columns first in the index. It can be helpful in some cases, but it’s usually much less important than avoiding random I/O and sorting, all things considered. Specific cases vary, so there’s no one-size-fits-all rule. This rule of thumb is probably less important than you think.

Placing the most selective columns first can be a good idea when there is no sorting or grouping to consider, and thus the purpose of the index is only to optimize WHERE lookups. For example:

SELECT * FROM payment WHERE staff_id = 2 AND customer_id = 584;

Should you create an index on (staff_id, customer_id), or should you reverse the column order? We can run some quick queries to help examine the distribution of values in the table and determine which column has higher selectivity.

SELECT SUM(staff_id = 2), SUM(customer_id = 584) FROM payment;
The result:
SUM(staff_id = 2): 7992
SUM(customer_id = 584): 30

According to the rule of thumb, we should place customer_id first in the index, because the predicate matches fewer rows in the table (it has higher selectivity).

If you don’t have specific samples to run, it might be better to use the rule of thumb, which is to look at the cardinality across the board, not just for one query:

SELECT COUNT(DISTINCT staff_id)/COUNT(*) AS staff_id_selectivity,
       COUNT(DISTINCT customer_id)/COUNT(*) AS customer_id_selectivity,
       COUNT(*)
FROM payment;
The result:
staff_id_selectivity: 0.0001
customer_id_selectivity: 0.0373
COUNT(*): 16049

customer_id has higher selectivity, so again the answer is to put that column first in the index.

You have to be careful not to assume that average-case performance is representative of special-case performance. Special cases can wreck performance for the whole application.

Although the rule of thumb about selectivity and cardinality is important, other factors, such as sorting, grouping, and the presence of range conditions in the query's WHERE clause, can make a much bigger difference to query performance.

Clustered Indexes

Clustered indexes aren’t separate type of index, but make adjacent key values together. The exact detail vary between implementations, but InnoDB’s clustered indexes actually store a B-Tree index. The term “clustered” refers to the fact that rows with adjacent key values are stored close to each other.

You can have only one clustered index per table, because you can’t store the rows in two places at once. Storage engines are responsible for implementing indexes, but not all storage engines support clustered indexes.

InnoDB clusters the data by the primary key. If you don’t define a primary key, InnoDB will try to use a unique nonnullable index instead. If there is no such index, InnoDB will define a hidden primary key for you and then cluster on that. InnoDB clusters records together only within a page.

A clustering primary key can help performance, but it can also cause serious performance problems. Thus, you should think carefully about clustering, especially when you change a table’s storage engine from InnoDB to something else.

Advantages of clustered indexes

  • It keeps related data close together.
  • Data access is fast.

Disadvantages of clustered indexes

  • Insert speeds depend heavily on insertion order. Inserting rows in primary key order is the fastest way (a sketch follows this list).
  • Updating the clustered index columns is expensive, because it forces InnoDB to move each updated row to a new location.
  • Secondary (nonclustered) index accesses require two index lookups instead of one, because a secondary index entry stores the row's primary key values rather than a pointer to the row's physical location.
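A minimal sketch of the insertion-order point: a surrogate AUTO_INCREMENT primary key keeps inserts arriving in clustered (primary key) order, which avoids page splits and fragmentation from random insertion. The table here is illustrative:

CREATE TABLE userinfo (
    id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
    name  VARCHAR(64) NOT NULL,
    email VARCHAR(64) NOT NULL,
    PRIMARY KEY (id),   -- monotonically increasing clustered key
    KEY (email)         -- secondary index; stores id values, not row pointers
) ENGINE=InnoDB;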

Covering Indexes

An index that contains (or “covers”) all the data needed to satisfy a query is called a covering index. In other words, a covering index is an index that contains all the columns needed by your queries. In MySQL, secondary indexes contain the primary key column values by default.

Benefits of covering indexes

  • Index entries are usually much smaller than the full row size, so MySQL can access significantly less data if it reads only the index.
  • Indexes are sorted by their index values, so I/O-bound range accesses will need to do less I/O compared to fetching each row from a random disk location.

A covering index can’t be any kind of index. The index must store the values from the columns it contains. Hash, spatial, and full-text indexes don’t store these values, so MySQL can use only B-Tree indexes to cover queries.

When you issue a query that is covered by an index, you will see “Using index” in the Extra column in EXPLAIN. For example, the inventory table has a multicolumn index on (store_id, film_id):

EXPLAIN SELECT store_id, film_id FROM inventory;
Result:
...
Extra: Using index

Using Index Scans for Sorts

MySQL has two ways to produce ordered results: using a sort operation, or scanning an index in order. You can tell when MySQL plans to scan an index by looking for “index” in the type column in EXPLAIN.

Scanning the ordered index itself is fast, because it simply requires moving from one index entry to the next. However, if the index doesn't cover the query, MySQL has to look up each row it finds in the index, which is basically random I/O. Thus, reading data in index order is usually much slower than a sequential table scan, especially for I/O-bound workloads.

MySQL can use the same index for both sorting and finding rows. It’s a good idea to design your indexes so that they’re useful for both tasks at once.

Ordering the result by the index works:

  • Only when the index's order is exactly the same as the ORDER BY clause and all columns are sorted in the same direction (ascending or descending).
  • If the query joins multiple tables, only when all columns in the ORDER BY clause refer to the first table.
  • Only when the ORDER BY clause forms a leftmost prefix of the index.
  • One case where the ORDER BY clause doesn't have to specify a leftmost prefix of the index is if there are constants for the leading columns. If the WHERE clause or a JOIN clause specifies constants for these columns, they can “fill the gaps” in the index. A sketch follows this list.
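For example, a sketch assuming sakila.rental's index on (rental_date, inventory_id, customer_id): the WHERE clause pins rental_date to a constant, so the ORDER BY columns effectively form a leftmost prefix of the index.

SELECT rental_id, staff_id FROM sakila.rental
WHERE rental_date = '2005-05-25'
ORDER BY inventory_id, customer_id;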

Packed (Prefix-Compressed) Indexes

Prefix compression reduces index size, allowing more of the index to fit in memory and dramatically improving performance in some cases.

Compressed blocks use less space, but they make some operations slower, such as binary searches within a block and reverse-order (ORDER BY ... DESC) scans.

The trade-off is one of CPU and memory resources versus disk resources. Packed indexes can be about one-tenth the size on disk, and if you have an I/O-bound workload they can more than offset the cost for some queries.

You can control how a table’s indexes are packed with the PACK_KEYS option to CREATE TABLE or ALTER TABLE. For example,

CREATE TABLE `test_database`.`new_table` (
    `id` INT NOT NULL,
    `address` VARCHAR(45) NULL,
    PRIMARY KEY (`id`),
    INDEX `address` (`address` ASC))
PACK_KEYS = 1;

The PACK_KEYS option can take three values: 1, 0, and DEFAULT. Setting it to 1 packs the indexes on all kinds of columns; setting it to 0 disables all index compression; and DEFAULT packs only the indexes on CHAR, VARCHAR, BINARY, or VARBINARY columns.

Conclusion of using packed indexes

  • Packed indexes may give great space savings.
  • PACK_KEYS applies to MyISAM tables only.
  • Packed indexes can slow down your integer joins a lot.
  • Packed indexes make reads faster, but updates become slower.

Redundant and Duplicate Indexes

MySQL allows you to create multiple indexes on the same column; it does not notice this and does not protect you from the mistake. MySQL has to maintain each duplicate index separately, and the query optimizer will consider each of them when it optimizes queries.

Duplicate indexes are indexes of the same type, created on the same set of columns in the same order. You should try to avoid creating them, and you should remove them if you find them.

Sometimes you can create duplicate indexes without knowing it. For example:

CREATE TABLE table_name (
    ID INT NOT NULL PRIMARY KEY,
    ...
    UNIQUE(ID),
    INDEX(ID)
);

MySQL implements UNIQUE constraints and PRIMARY KEY constraints with indexes, so this example actually creates three indexes on the same column.

Redundant indexes are a bit different from duplicate indexes. If there is an index on (A, B), another index on (A) would be redundant because it is a prefix of the first index. The index on (A, B) can also be used as an index on (A) when they are B-Tree indexes.

Redundant indexes usually appear when people add indexes to a table. In most cases you don’t want redundant indexes, and to avoid them you should extend existing indexes rather than add new ones. Still, there are times when you will need redundant indexes for performance reasons. Extending an existing index might make it much larger and reduce performance for some queries.

The drawback of having two indexes is the maintenance cost. Inserting new rows into the table with more indexes is slower. Adding new indexes might have a performance impact for INSERT, UPDATE, DELETE operations, especially if a new index causes you to hit memory limits.

The solution for redundant and duplicate indexes is simply to drop them, but first you need to identify them. You can write various complicated queries against the INFORMATION_SCHEMA tables, or use tools to analyze them, such as common_schema and Percona Toolkit's pt-duplicate-key-checker.
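A simple sketch against INFORMATION_SCHEMA: list each index's column sequence per table, so exact duplicates (same columns, same order) stand out. The schema name is a placeholder:

SELECT TABLE_NAME, INDEX_NAME,
       GROUP_CONCAT(COLUMN_NAME ORDER BY SEQ_IN_INDEX) AS index_columns
FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = 'your_database'  -- hypothetical schema name
GROUP BY TABLE_NAME, INDEX_NAME
ORDER BY TABLE_NAME, index_columns;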

Be careful when determining which indexes are candidates for dropping or extending. It’s good to validate your planned changes carefully with a tool such as pt-upgrade from Percona Toolkit.

Unused Indexes

In addition to duplicate and redundant indexes, you might have some indexes that the server simply doesn’t use. You should consider dropping them. There are two tools that can help you identify unused indexes. The easiest and most accurate is the INFORMATION_SCHEMA.INDEX_STATISTICS table in Percona Server and MariaDB. Alternatively, you can use the pt-index-usage tool included in Percona Toolkit.

Indexes and Locking

Indexes permit queries to lock fewer rows. If your queries never touch rows they don't need, they'll lock fewer rows, and that's better for performance.

InnoDB locks rows only when it accesses them, and an index can reduce the number of rows InnoDB accesses and therefore locks. However, this works only if InnoDB can filter out the undesired rows at the storage engine level. For example,

SET AUTOCOMMIT=0;
BEGIN;
SELECT actor_id FROM actor WHERE actor_id < 5
AND actor_id <> 1 FOR UPDATE;
Result:
+----------+
| actor_id |
+----------+
| 2 |
| 3 |
| 4 |
+----------+

This query returns only rows 2 through 4, but it actually gets exclusive locks on rows 1 through 4. InnoDB locked row 1 because the plan MySQL chose for this query was an index range access:

EXPLAIN SELECT actor_id FROM actor WHERE actor_id < 5 
AND actor_id <> 1 FOR UPDATE;
Result:
+----+-------------+-------+-------+---------+--------------------------+
| id | select_type | table | type | key | Extra |
+----+-------------+-------+-------+---------+--------------------------+
| 1 | SIMPLE | actor | range | PRIMARY | Using where; Using index |
+----+-------------+-------+-------+---------+--------------------------+

In other words, the low-level storage engine operation was “begin at the start of the index and fetch all rows until actor_id < 5 is false”. The server didn't tell InnoDB about the WHERE condition that eliminated row 1. Note the presence of “Using where” in the Extra column in EXPLAIN. This indicates that the MySQL server is applying a WHERE filter after the storage engine returns the rows.

Index and Table Maintenance

Once you have created tables with proper data types and added indexes, your work isn't over: you still need to maintain your tables and indexes to make sure they perform well. The three main goals of table maintenance are: finding and fixing corruption, maintaining accurate index statistics, and reducing fragmentation.

Finding and Repairing Table Corruption

The worst thing that can happen is table corruption. This often happens due to crashes. However, all storage engines can experience index corruption due to hardware problems or internal bugs in MySQL or the operating system.

Corrupted indexes can cause queries to return incorrect results, raise duplicate-key errors when there is no duplicate value, or even cause lockups and crashes. If you experience odd behavior you can run CHECK TABLE to see if the table is corrupt. CHECK TABLE usually catches most table and index errors. (Note that some storage engines don’t support this command.)

You can fix corrupt tables with the REPAIR TABLE command, but not all storage engines support this.

Alternatively, you can either use an offline engine-specific repair utility, such as myisamchk, or dump the data and reload it.
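For example, a minimal sketch, assuming a table named actor:

CHECK TABLE actor;                -- detects most table and index errors
REPAIR TABLE actor;               -- not supported by all storage engines
ALTER TABLE actor ENGINE=InnoDB;  -- alternative: rebuild the table in place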

If you experience data corruption, the most important thing to do is try to determine why it’s occurring; don’t simply repair the data, or the corruption could return.

Updating Index Statistics

The MySQL query optimizer uses two API calls to ask the storage engines how index values are distributed when deciding how to use indexes. The first is the records_in_range() call, which accepts range end points and returns the number of records in that range.

The second API call is info(), which can return various types of data, including index cardinality (approximately how many records there are for each key value).

If the statistics were never generated, or if they are out of date, the optimizer can make bad decisions. The solution is to run ANALYZE TABLE, which regenerates the statistics.

Each storage engine implements index statistics differently, so the frequency with which you’ll need to run ANALYZE TABLE differs, as does the cost of running the statement:

  • The Memory storage engine does not store index statistics at all.
  • MyISAM stores statistics on disk, and ANALYZE TABLE performs a full index scan to compute cardinality. The entire table is locked during this process.
  • InnoDB does not store statistics on disk as of MySQL 5.5, but rather estimates them with random index dives and stores them in memory.

You can examine the cardinality of your indexes with the SHOW INDEX FROM command.
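For example (again assuming a table named actor):

ANALYZE TABLE actor;    -- regenerate index statistics
SHOW INDEX FROM actor;  -- inspect the Cardinality column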

InnoDB calculates statistics for indexes when tables are first opened, when you run ANALYZE TABLE, and when the table’s size changes significantly.

For even more query plan stability, and for faster system warmups, you can use a system table to store index statistics so they are stable across server restarts and don't need to be recomputed when InnoDB opens the table for the first time after booting up. Index statistics persistence in MySQL 5.6 is controlled by the innodb_analyze_is_persistent option.

If you configure your server not to update index statistics automatically, you need to do it manually with periodic ANALYZE TABLE commands, unless you know that the statistics won't change in ways that affect your query plans.

Reducing Index and Data Fragmentation

B-Tree indexes can become fragmented, which might reduce performance. Fragmented indexes can be poorly filled and non-sequential on disk.

The table’s data storage can also become fragmented. However, data storage fragmentation is more complex than index fragmentation. There are three types of data fragmentation:

  • Row fragmentation
  • Intra-row fragmentation
  • Free space fragmentation

MyISAM tables might suffer from all types of fragmentation, but InnoDB never fragments short rows; it moves them and rewrites them in a single piece.

To defragment data, you can either run OPTIMIZE TABLE or dump and reload data. These approaches work for most storage engines. For storage engines that don’t support OPTIMIZE TABLE, you can rebuild the table with a no-op ALTER TABLE. Just alter the table to have the same engine it currently uses:

ALTER TABLE <table> ENGINE=<engine>;

Don't assume that you need to defragment your indexes and tables; measure them first to find out. Percona XtraBackup has a --stats option that can help you measure fragmentation.

Conclusion

Indexing is a very complex topic. There are no fixed rules for how to index. It depends on many factors, such as your data structures, how your application uses the data, what results you want, how the storage engine implements each type of index, and the properties of each index type.

If you want to improve a slow query, you can profile the query to see which subtask costs the most time, or use the EXPLAIN command. Whichever way you analyze it, you always need to find the reason the query is slow. Several methods may help improve a query: updating the table's schema, adding or extending indexes, and rewriting the query statement. Remember that indexing is only part of improving a query, and note that while indexes are good for reads, too many indexes can slow down write operations.

There are some basic principles to choose indexes:

  • The general-purpose index is the B-Tree index; the other types of indexes are better suited to special purposes.
  • Reading data from index files is faster than reading from physical data files.
  • Single-row access is slow. It's better to read a block that contains many of the rows you need.
  • Accessing ranges of rows in order is fast. First, sequential I/O doesn't require disk seeks, so it is faster than random I/O. Second, it doesn't need any follow-up work to sort the results.
  • Index-only access is fast. If an index contains all the columns that the query needs, the storage engine doesn't need to find the other columns by looking up rows in the table. This avoids lots of single-row accesses.

General process for choosing indexes:

  1. Choose a type of index.
  2. Determine which properties of the index to use.
    • Prefix indexes and compressed indexes.
    • Multicolumn indexes (e.g., B-Tree, hash) and covering indexes (e.g., B-Tree); also consider the column order of multicolumn indexes.
  3. Avoid adding redundant and duplicate indexes, and remove unused indexes.

It’s very important to be able to reason through how indexes work, and to choose them based on that understanding, not on rules of thumb or heuristics such as “place the most selective columns first in multicolumn indexes” or “you should index all of the columns that appear in the WHERE clause”.

If you find a query that doesn’t benefit from all of these possible advantages of indexes, see if a better index can be created to improve performance. If not, perhaps a rewrite can transform the query so that it can use an index.

Appendixes

I. Index Comparison Table

Index Type   | Indexed Columns | Time Complexity | Equality Query               | Range Query | Order and Group Query | Covering Index
B-Tree Index | Multiple        | O(log n)        | Yes (leftmost indexed value) | Yes         | Yes                   | Yes
Hash Index   | Multiple        | O(1)            | Yes (entire indexed value)   | No          | No                    | No
Bitmap Index | One             | O(1)            | Yes                          | No          | No                    | No

Note: bitmap indexes suit columns with a small range of distinct values.

References

[1] High Performance MySQL by Baron Schwartz, Vadim Tkachenko, Peter Zaitsev, Derek J. Balling

[2] Speed up your queries using the covering index in MySQL

This post talks about MySQL schema optimization. If you are familiar with this topic, you can skip to the conclusion, which is a good summary of schema optimization in MySQL and may give you some enlightenment.

Good logical and physical design is the cornerstone of high performance, and you must design your schema for the specific queries you will run. This often involves trade-offs. For example, a denormalized schema can speed up some types of queries but slow down others. Adding counter and summary tables is a great way to optimize queries, but they can be expensive to maintain.

Choosing Optimal Data Types

General principles for choosing better data types

  • Smaller is usually better. Try to use the smallest data type that can correctly store and represent your data. Smaller data types are usually faster, because they use less space on disk, in memory, and in the CPU cache. They also generally require fewer CPU cycles to process.
  • Simple is good. Fewer CPU cycles are typically required to process operations on simpler data types. For example, integers are cheaper to compare than characters, because character sets and collations (sorting rules) make character comparisons complicated.
  • Avoid NULL if possible. It’s usually best to specify columns as NOT NULL unless you intend to store NULL in them. It’s harder for MySQL to optimize queries that refer to nullable columns, because they make indexes, index statistics, and value comparisons more complicated.

Steps to choosing better data types

  1. Determining what general class of types is appropriate: numeric, string, temporal, and so on.
  2. Choosing the specific type. Many of MySQL's data types can store the same kind of data but vary in the range of values they can store, the precision they permit, or the physical space (on disk and in memory) they require. Some data types also have special behaviors or properties. For example, DATETIME and TIMESTAMP can both store a date and time, to a precision of one second. However, TIMESTAMP uses only half as much storage space and has time-zone-aware autoupdating capabilities.

MySQL supports many aliases for compatibility, such as INTEGER, BOOL, and NUMERIC. These are only aliases and don't affect performance. You can use SHOW CREATE TABLE to see the base types rather than the aliases you used.
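For example, a quick sketch (the table name is hypothetical):

CREATE TABLE alias_test (flag BOOL, n NUMERIC(5));
SHOW CREATE TABLE alias_test;
-- the output shows the base types: tinyint(1) and decimal(5,0)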

Numbers

There are two kinds of numbers: whole numbers and real numbers (numbers with a fractional part).

Whole Numbers

Integer types: TINYINT, SMALLINT, MEDIUMINT, INT, and BIGINT. These require 8, 16, 24, 32, and 64 bits of storage space, respectively. They can store values from -2^(N-1) to 2^(N-1)-1, where N is the number of bits of storage space they use.

Integer types can optionally have the UNSIGNED attribute, which disallows negative values and approximately doubles the upper limit of positive values. For example, a TINYINT UNSIGNED can store values ranging from 0 to 255 instead of from -128 to 127.

Your choice determines how MySQL stores the data, in memory and on disk. However, integer computations generally use 64-bit BIGINT integers, even on 32-bit architectures. (The exceptions are some aggregate functions, which use DECIMAL or DOUBLE to perform computations.)

MySQL lets you specify a width for integer types, such as INT(11). This is meaningless for most applications: it does not restrict the legal range of values, but simply specifies the number of characters MySQL's interactive tools (such as the command-line client) reserve for display.

Real Numbers

Real numbers are numbers that have a fractional part. However, they aren’t just for fractional numbers. You can also use DECIMAL to store integers that are so large they don’t fit in BIGINT.

The FLOAT and DOUBLE types support approximate calculations with standard floating-point math. The DECIMAL type is for storing exact fractional numbers; it supports exact math.

Floating-point math is significantly faster than decimal math, because the CPU performs the computations natively.

Both floating-point and DECIMAL types let you specify a precision. For a DECIMAL column, you can specify the maximum allowed digits before and after the decimal point. For example, DECIMAL(5, 2) can store any value with five digits in total and two after the decimal point, so it can store values from -999.99 to 999.99.

MySQL uses DOUBLE for its internal calculations on floating-point types.

Note: Because of the additional space requirements and computational cost, you should use DECIMAL only when you need exact results for fractional numbers, for example, when storing financial data. You can also use BIGINT to store financial data, avoiding both the imprecision of floating-point storage and the cost of the precise DECIMAL math.

String Types

VARCHAR and CHAR types

The two major string types are VARCHAR and CHAR, which store character values.

VARCHAR stores variable-length character strings and is the most common string data type. It can require less storage space than fixed-length types, because it uses only as much space as it needs.

CHAR is fixed-length. Values are padded with spaces as needed for comparisons. CHAR is useful if you want to store very short strings, or if all the values are nearly the same length. For example, CHAR is a good choice for MD5 values of user passwords. CHAR is also better than VARCHAR for data that's changed frequently, because a fixed-length row is not prone to fragmentation. For very short columns, CHAR is also more efficient than VARCHAR, because VARCHAR needs an extra length byte.
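One behavioral difference worth knowing: CHAR strips trailing spaces on retrieval, while VARCHAR preserves them. A minimal sketch (the table name is hypothetical):

CREATE TABLE char_test (c CHAR(10), v VARCHAR(10));
INSERT INTO char_test VALUES ('string ', 'string ');
SELECT CONCAT("'", c, "'"), CONCAT("'", v, "'") FROM char_test;
-- returns 'string' for the CHAR column, but 'string ' for the VARCHAR column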

BLOB and TEXT types

BLOB and TEXT are string data types designed to store large amounts of data as either binary or character strings, respectively.

Unlike with all other data types, MySQL handles each BLOB and TEXT value as an object with its own identity. Storage engines often store them specially. InnoDB may use a separate external storage area for them when they’re large.

The only difference between the BLOB and TEXT families is that BLOB types store binary data with no collation or character set, while TEXT types store character data that has a character set and collation (used for sorting).

MySQL sorts BLOB and TEXT columns differently from other types: it sorts only the first max_sort_length bytes of such columns. If you need to sort by only the first few characters, you can either decrease max_sort_length or use ORDER BY SUBSTRING(column, length).

MySQL can’t index the full length of these data types and can’t use the indexes for sorting.

Using ENUM instead of a string type

Sometimes you can use an ENUM column instead of conventional string types. An ENUM column can store a predefined set of distinct string values. MySQL stores them very compactly. It stores each value internally as an integer representing its position in the field definition list.

ENUM values sort by the internal integer values, not by the strings themselves. You can also use FIELD() to specify a sort order explicitly in your queries, but this prevents MySQL from using the index for sorting.
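For example, a minimal sketch of ENUM sort behavior (names are hypothetical):

CREATE TABLE enum_test (e ENUM('apple', 'dog', 'banana') NOT NULL);
INSERT INTO enum_test VALUES ('banana'), ('apple'), ('dog');
SELECT e FROM enum_test ORDER BY e;
-- sorts as apple, dog, banana (the definition order, not alphabetical)
SELECT e FROM enum_test ORDER BY FIELD(e, 'apple', 'banana', 'dog');
-- sorts in the order given to FIELD(), but cannot use an index for sorting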

The biggest disadvantage of ENUM is that the list of strings is fixed, and adding or removing strings requires the use of ALTER TABLE. It might not be a good idea to use ENUM as a string data type when the list of allowed string values is likely to change arbitrarily in the future, unless it’s acceptable to add them at the end of the list, which can be done without a full rebuild of the table.

Another disadvantage is that MySQL stores ENUM values as integers and has to do a lookup to convert them to their string representations, which adds some overhead. In particular, it can be slower to join a CHAR or VARCHAR column to an ENUM column than to another CHAR or VARCHAR column.

Date and Time Types

MySQL has many types for various kinds of date and time values, such as YEAR and DATE. The finest granularity of time MySQL can store is one second. However, it can do temporal computations with microsecond granularity.

Most date and time types have no alternatives, so there is no question of which one is the best choice. The only question is what to do when you need to store both date and time. MySQL offers two very similar data types for this purpose: DATETIME and TIMESTAMP. For many applications, either will work, but in some cases, one works better than the other.

DATETIME

DATETIME stores values from the year 1001 to the year 9999, with a precision of one second. It stores the date and time packed into an integer in YYYYMMDDHHMMSS format, independent of time zone. This uses eight bytes of storage space.

It is not recommended to use a numeric column type (such as INT) to store datetime information. MySQL (and other systems too) provides many functions to handle datetime values; these functions are faster and more optimized than custom functions or calculations.

TIMESTAMP

TIMESTAMP stores the number of seconds elapsed since midnight, January 1, 1970, Greenwich Mean Time (GMT), the same as a Unix timestamp. TIMESTAMP uses only four bytes of storage, so it has a much smaller range than DATETIME: from the year 1970 to the year 2038. MySQL provides the FROM_UNIXTIME() and UNIX_TIMESTAMP() functions to convert a Unix timestamp to a date, and vice versa.
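For example (the exact result of UNIX_TIMESTAMP depends on the session time zone):

SELECT UNIX_TIMESTAMP('2020-01-01 00:00:00');  -- e.g., 1577836800
SELECT FROM_UNIXTIME(1577836800);              -- e.g., '2020-01-01 00:00:00'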

The value a TIMESTAMP displays also depends on the time zone. The MySQL server, operating system, and client connections all have time zone settings.

TIMESTAMP also has special properties that DATETIME doesn't have. TIMESTAMP columns are NOT NULL by default, which is different from every other data type.

In general you should use TIMESTAMP to store date and time, because it is more space-efficient than DATETIME.

Subsecond

If you need to store a date and time value with subsecond resolution, MySQL (before version 5.6.4, which added fractional seconds support; see the appendix) does not have an appropriate data type for it, but you can use your own storage format: you can use the BIGINT data type and store the value as a timestamp in microseconds, or you can use a DOUBLE and store the fractional part of the second after the decimal point. Or you can use MariaDB instead of MySQL.
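A minimal sketch of the BIGINT approach, assuming microseconds since the epoch and a hypothetical table:

CREATE TABLE events (created_us BIGINT UNSIGNED NOT NULL);
-- UNIX_TIMESTAMP() only has second resolution; a client library would
-- supply the true microsecond part
INSERT INTO events VALUES (UNIX_TIMESTAMP() * 1000000);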

Bit-Packed Data Types

MySQL has a few data types that use individual bits within a value to store data compactly.

BIT

You can use a BIT column to store one or many true/false values in a single column. BIT(1) defines a field that contains a single bit, BIT(2) stores 2 bits, and so on. The maximum length of a BIT column is 64 bits. For example, BIT(2) can store the integer values 0 to 3, representing the two-bit binary values 00, 01, 10, and 11, respectively.

MySQL treats BIT as a string type, not a numeric type. When you retrieve a BIT(1) value, the result is a string but the contents are the binary value 0 or 1, not the ASCII value “0” or “1”.
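A minimal sketch of this pitfall (the table name is hypothetical); adding 0 forces a numeric context:

CREATE TABLE bit_test (b BIT(8));
INSERT INTO bit_test VALUES (b'00111001');
SELECT b + 0, BIN(b + 0) FROM bit_test;  -- returns 57 and 111001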

This data type is very confusing, so you should use BIT with caution. For most applications, it is a better idea to avoid this type. If you want to store a true/false value in a single bit of storage space, another option is to create a nullable CHAR(0) column. This column is capable of storing either the absence of a value (NULL) or a zero-length value (the empty string).

Data Types Choice for Specific Usage

Choosing Identifiers

Choosing a good data type for an identifier column is very important. You are more likely to compare these columns to other values (for example, in joins) and to use them for lookups than other columns.

When choosing a type for an identifier column, you need to consider not only the storage type, but also how MySQL performs computations and comparisons on that type. For example, MySQL stores ENUM and SET types internally as integers but converts them to strings when doing comparisons in a string context.

You should use the same data types for identifier columns in related tables. The types should match exactly, including properties such as UNSIGNED.

Choose the smallest size that can hold your required range of values.

Integer types are usually the best choice for identifiers, because they are fast and they work with AUTO_INCREMENT.

ENUM and SET types are generally a poor choice for identifiers.

String types. Avoid string types for identifiers if possible, because they take up a lot of space and are generally slower than integer types.

You should also be very careful with completely random strings, such as those produced by MD5(), SHA1(), or UUID(). Each new value you generate with them will be distributed in arbitrary ways over a large space, which can slow INSERT statements (because of random insert locations in indexes) and some types of SELECT queries (because logically adjacent rows will be widely dispersed on disk and in memory).

If you do store UUID values, you should remove the dashes or, even better, convert the UUID values to 16-byte numbers with UNHEX() and store them in a BINARY(16) column. You can retrieve the values in hexadecimal format with the HEX() function. Values generated by UUID() have different characteristics from those generated by a cryptographic hash function such as SHA1(): the UUID values are unevenly distributed and are somewhat sequential. They're still not as good as a monotonically increasing integer.

Insertion operations

  • Autoincrement integer: O(1)
  • UUID: O(log n)

Query n rows by insertion order

  • Autoincrement integer: O(1 * log n)
  • UUID: O(n * log n)

Example of storing and querying a UUID with BINARY(16):

INSERT INTO sometable (your_id)
VALUES (UNHEX(REPLACE('3f06af63-a93c-11e4-9797-00505690773f', '-', '')));
SELECT * FROM sometable WHERE your_id = UNHEX('3f06af63a93c11e4979700505690773f');

IPv4 Address

You should use the INT UNSIGNED data type (unsigned 32-bit integers) to store IPv4 addresses. MySQL provides the INET_ATON() and INET_NTOA() functions to convert between the two representations.

INSERT INTO yourtable (ip_addr) VALUES (INET_ATON('94.120.175.182'));
SELECT INET_NTOA(ip_addr) FROM yourtable;

Schema Design Pitfalls in MySQL

There are also issues that arise from how MySQL is implemented, and that means you can make MySQL-specific mistakes, too. We need to avoid those mistakes and choose alternatives that work better with MySQL’s specific implementation.

Too many columns

MySQL’s storage engine API works by copying rows between the server and the storage engine in a row buffer format. The server then decodes the buffer into columns. But it can be costly to turn the row buffer into the row data structure with the decoded columns. If you are planning for hundreds of columns, be aware that the server’s performance characteristics will be a bit different.

Too many joins

Too many joins are problematic in MySQL, because the cost of planning and optimizing a query with many joins is very high. MySQL has a limit of 61 tables per join, and entity-attribute-value designs require many self-joins. As a rough rule of thumb, it's better to have a dozen or fewer tables per query if you need queries to execute very fast with high concurrency.

The all-powerful ENUM

Beware of overusing ENUM. For example,

CREATE TABLE ... (
country enum('', '1', '2',..., '31')

This would be a questionable design decision in any database with an enumerated value type, because it really should be an integer that is foreign-keyed to a "dictionary" or "lookup" table anyway. Before MySQL 5.1 you can't add a new country to the list without an ALTER TABLE, which is a blocking operation; even in 5.1 and newer, it blocks unless you add the new value at the end of the list.

The ENUM in disguise

An ENUM permits the column to hold one value from a set of defined values. A SET permits the column to hold one or more values from a set of defined values. Sometimes these can be easy to confuse. For example,

CREATE TABLE ...(
is_default set('Y', 'N') NOT NULL default 'N'

That almost surely ought to be an ENUM instead of a SET, assuming that it can’t be both true and false at the same time.

NULL not invented here

There are some benefits to avoiding NULL, and indeed you can consider alternatives when possible. Even when you do need to store a "no value" fact in a table, you might not need to use NULL. Perhaps you can use zero, a special value, or an empty string instead.

Storing NOT NULL values can perform better, but avoiding NULL at all costs can confuse a column's meaning, complicate your code, and introduce bugs, so don't take this to extremes. Don't be too afraid of using NULL when you need to represent an unknown value. In some cases, it's better to use NULL than a magical constant. Handling NULL isn't always easy, but it's often better than the alternative. For example:

CREATE TABLE ...(
dt DATETIME NOT NULL DEFAULT '0000-00-00 00:00:00'

That bogus all-zeros value can cause lots of problems.

Normalization and Denormalization

There are usually many ways to represent any given data, ranging from fully normalized to fully denormalized and anything in between. In a normalized database, each fact is represented once and only once. Conversely, in a denormalized database, information is duplicated, or stored in multiple places.

Storing duplicate data may cause data inconsistency and needs more storage space. Normalization is a way to avoid these problems.

Pros and Cons of a Normalized Schema

Advantages

  • Normalized updates are usually faster than denormalized updates.
  • Normalization makes your tables have little or no duplicated data, so there’s less data to change.
  • Normalized tables are usually smaller, so they fit better in memory and perform better.
  • The lack of redundant data means there’s less need for DISTINCT or GROUP BY queries when retrieving lists of values.

Disadvantages

  • Any nontrivial query on a well-normalized schema will probably require at least one join, and perhaps several. This is not only expensive, but it can make some indexing strategies impossible.

Pros and Cons of a Denormalized Schema

A denormalized schema works well because everything is in the same table, which avoids joins.

Advantages

  • Avoids joins. When you don't need to join tables, the worst case for most queries, even the ones that don't use indexes, is a full table scan. This can be much faster than a join when the data doesn't fit in memory, because it avoids random I/O.
  • A single table can also allow more efficient indexing strategies.

A Mixture of Normalized and Denormalized

The truth is that fully normalized and fully denormalized schemas rarely fit the real world; you often need to mix the approaches, possibly using a partially normalized schema, cache tables, and other techniques.

Denormalized data is more expensive to update, because you have to change it everywhere it is duplicated. You must consider how frequently you'll have to make such changes and how long they will take, compared to how often you'll run the SELECT queries that benefit.

Cache and Summary Tables

Sometimes the best way to improve performance is to keep redundant data in the same table as the data from which it was derived. However, sometimes you’ll need to build completely separate summary or cache tables, specially tuned for your retrieval needs. This approach works best if you can tolerate slightly stale data, but sometimes you really don’t have a choice. For instance, when you need to avoid complex and expensive real-time updates.

Cache tables are tables that contain data that can be easily, if more slowly, retrieved from the rest of the schema; their data is logically redundant. Cache tables are useful for optimizing search and retrieval queries. For example, you might need many different index combinations to speed up various types of queries. These conflicting requirements sometimes demand that you create a cache table that contains only some of the columns from the main table.

Summary tables are tables that hold aggregated data from GROUP BY queries; their data is not logically redundant. They are also called "roll-up tables". For example, it would be impossible to maintain an accurate real-time counter on a busy site. Instead, you could generate a summary table every hour. You can often do this with a single query, and it's more efficient than maintaining counters in real time. The drawback is that the counts are not 100% accurate.

When using cache and summary tables, you have to decide whether to maintain their data in real time or with periodic rebuilds. Which is better will depend on your application, but a periodic rebuild not only can save resources but also can result in a more efficient table that's not fragmented and has fully sorted indexes.

When you rebuild summary and cache tables, you'll often need their data to remain available during the operation. You can achieve this by using a "shadow table", which is a table you build behind the real table. When you're done building it, you can swap the tables with an atomic rename. For example:

DROP TABLE IF EXISTS my_summary_new, my_summary_old;
CREATE TABLE my_summary_new LIKE my_summary;
-- populate my_summary_new here, then swap it into place atomically:
RENAME TABLE my_summary TO my_summary_old, my_summary_new TO my_summary;

Materialized Views

A view is a logical virtual table created by a SELECT query, but its result is not stored anywhere on disk; the query is executed every time the view is accessed, so you always get the latest data from the original tables.

A materialized view is also a logical view of the data defined by a SELECT query, but the query's result is stored in a table on disk, along with the definition of the query itself.

MySQL doesn't support this natively. However, you can implement materialized views yourself, using Justin Swanhart's open source Flexviews tool.

Counter Tables

An application that keeps counts in a table can run into concurrency problems when updating the counters. Such tables are very common in web applications. You can use them to cache the number of friends a user has, the number of downloads of a file, and so on. It's often a good idea to build a separate table for the counters, to keep it small and fast.

-- a single-row counter: simple, but every transaction serializes on the row's lock
CREATE TABLE hit_counter (
    cnt int unsigned not null
) ENGINE=InnoDB;
UPDATE hit_counter SET cnt = cnt + 1;

-- higher concurrency: spread the counter over 100 pre-populated slot rows,
-- update one slot at random, and sum the slots to read the total
CREATE TABLE hit_counter (
    slot tinyint unsigned not null primary key,
    cnt int unsigned not null
) ENGINE=InnoDB;
UPDATE hit_counter SET cnt = cnt + 1 WHERE slot = RAND() * 100;
SELECT SUM(cnt) FROM hit_counter;

Speeding Up Alter Table

MySQL’s ALTER TABLE performance can become a problem with very large tables. MySQL performs most alterations by making an empty table with the desired new structure, inserting all the data from the old table into the new one, and deleting the old table. This can take a very long time, especially if you’re short on memory and the table is large and has lots of indexes.

In general, most ALTER TABLE operations will cause interruption of service in MySQL. For the general case, you need to use either operational tricks such as swapping servers around and performing the ALTER on servers that are not in production service, or a shadow copy approach. Tools can help build the new table for a shadow copy: for example, the "online schema change" tools from Facebook's database operations team, Shlomi Noach's openark toolkit, and Percona Toolkit.

Not all ALTER TABLE operations cause table rebuilds. For example, you can change a column's default value in two ways: one slow, one fast. The following MODIFY COLUMN causes a full table rebuild:

ALTER TABLE your_table
MODIFY COLUMN your_column TINYINT(3) NOT NULL DEFAULT 5;

The following code is the fast way to modify a column's default value. The default value for the column is actually stored in the table's .frm file, so you can change it without touching the table itself. Any MODIFY COLUMN causes a table rebuild, but ALTER COLUMN changes only the default:

ALTER TABLE your_table
ALTER COLUMN your_column SET DEFAULT 5;

Conclusion

The general principles for choosing a data type are introduced earlier in this article. They are “smaller is usually better”, “simple is good”, and “avoid NULL if possible”.

  • Sometimes a column isn't limited to one kind of data type, so we can use the principle "simple is good" to determine which kind of data type to use.
  • Once we have determined the kind of data type for the column, we can follow the principle "smaller is usually better" to choose a specific data type. Then we can follow the principle "avoid NULL if possible" to determine whether the column allows NULL.

The kinds of data types are whole numbers, real numbers, strings (text), binary, temporal (time), and others. To choose a specific data type within a kind, you can use the following methods.

  • Whole numbers. 1. Choose the smallest integer type that fits your range of values. 2. If there are no negative values, make the column UNSIGNED, which roughly doubles the upper limit. 3. For boolean data, the best choice is BOOL (TINYINT); other choices are BIT(1) and CHAR(0). 4. Common usage scenarios of whole numbers include table identifiers and counters. For identifiers, integers are the best choice, and identifiers in related tables should all use the same integer type.
  • Real numbers. 1. If you need exact calculations, choose the DECIMAL type. 2. If you can accept approximate calculations, choose FLOAT or DOUBLE depending on the precision you need. 3. Note that you should use DECIMAL only when you need exact results for fractional numbers, because it costs more than floating-point math. 4. A common usage scenario of real numbers is financial data. DECIMAL is usually the better choice, but if the financial data has a small, fixed precision, you can convert all values to integers and store them in a BIGINT.
  • Temporal types. 1. Sometimes you can use integers to replace temporal types, because integer types are simpler and faster, but you lose many convenient built-in functions for temporal types, such as WEEK() and ADDDATE(). 2. The most common question is how to choose between DATETIME and TIMESTAMP for values that contain both date and time. The main differences are the representable range, the space cost, and time zone behavior.
  • String types. 1. The most common choices are VARCHAR and CHAR. When your column values are limited to a short, similar length, use CHAR; otherwise use VARCHAR. 2. When you need to store large amounts of characters, choose a type from the TEXT family according to the amount of text. 3. For binary data, the basic choices are BINARY and VARBINARY, which are like CHAR and VARCHAR but store binary strings. 4. For large amounts of binary data, choose a type from the BLOB family. 5. There are also special string types, such as ENUM, SET, and JSON. 6. Common usage scenarios of string types include ID card numbers, account numbers, names, and URIs. 7. Note that sometimes replacing strings with integers brings a large performance improvement; for example, an IPv4 address can be stored as an INT UNSIGNED.

Comparison and Sorting

  • Comparing values of the same data type is faster than comparing values of different data types.
  • Comparing numeric data types is faster than comparing character data types.

Data Type             | Compared and sorted as
INT or DOUBLE         | Numeric value
BINARY                | Numeric value (bytes)
DATETIME or TIMESTAMP | Numeric value (integer)
CHAR or VARCHAR       | Characters

Beware of Autogenerated Schemas

Autogenerated schemas can cause severe performance problems. Some programs use large VARCHAR fields for everything, or use different data types for columns that will be compared in joins. Be sure to double-check a schema if it was created for you automatically.

Appendix

I. Basic Information of Data Types

Data Type | Storage | Value Range | Note
BIT(M) | 1~64 bits, approximately (M+7)/8 bytes (fixed space cost specified by M) | binary values (0 ~ 2^64-1) | MySQL treats BIT as a string type, not a numeric type. This type is confusing; use it with caution.
TINYINT(M) | 1 byte | Signed: -128~127, Unsigned: 0~255 | M doesn't limit the length of the number; it's just the client display width.
BOOL, BOOLEAN (TINYINT(1)) | 1 byte | Signed: -128~127, Unsigned: 0~255 |
SMALLINT(M) | 2 bytes | Signed: -32768~32767, Unsigned: 0~65535 |
MEDIUMINT(M) | 3 bytes | Signed: -8388608~8388607, Unsigned: 0~16777215 |
INT(M), INTEGER(M) | 4 bytes | Signed: -2147483648~2147483647, Unsigned: 0~4294967295 |
BIGINT(M) | 8 bytes | Signed: -2^63~2^63-1, Unsigned: 0~2^64-1 |
FLOAT(M, D) | 4 bytes | approximately 7 decimal places | UNSIGNED disallows negative values.
DOUBLE(M, D) | 8 bytes | approximately 15 decimal places | UNSIGNED disallows negative values.
DECIMAL(M, D) (aliases: DEC, NUMERIC, FIXED) | length + 1 or 2 bytes (fixed space cost specified by M) | M: 1~65, D: 0~30 | UNSIGNED disallows negative values.
YEAR | 1 byte | 1901 to 2155 |
DATE | 3 bytes | '1000-01-01' to '9999-12-31' |
TIME [fsp] | 3 bytes (since MySQL 5.6.4, 3 bytes + fractional seconds storage) | '-838:59:59.000000' to '838:59:59.000000' | fsp: fractional seconds precision, range 0~6.
DATETIME [fsp] | 8 bytes (since MySQL 5.6.4, 5 bytes + fractional seconds storage) | '1000-01-01 00:00:00.000000' to '9999-12-31 23:59:59.999999' | fsp: fractional seconds precision, range 0~6.
TIMESTAMP [fsp] | 4 bytes (since MySQL 5.6.4, 4 bytes + fractional seconds storage) | '1970-01-01 00:00:01.000000' UTC to '2038-01-19 03:14:07.999999' UTC | fsp: fractional seconds precision, range 0~6.
CHAR(M) [charset] [collate] | fixed space cost specified by M | M: 0~255 characters |
VARCHAR(M) [charset] [collate] | variable space cost, max specified by M, 1- or 2-byte length prefix | M: 0~65535 characters |
TINYTEXT [charset] [collate] | variable space cost, 1-byte length prefix | max 255 (2^8-1) characters |
TEXT(M) [charset] [collate] | variable space cost, 2-byte length prefix | max 65535 (2^16-1) characters | M means the smallest number of characters.
MEDIUMTEXT [charset] [collate] | variable space cost, 3-byte length prefix | max 16,777,215 (2^24-1) characters |
LONGTEXT [charset] [collate] | variable space cost, 4-byte length prefix | max 4,294,967,295 (2^32-1) characters, i.e. 4GB |
BINARY(M) | fixed space cost specified by M (like CHAR) | M: 0~255 bytes |
VARBINARY(M) | variable space cost, max specified by M (like VARCHAR) | M: 0~65535 bytes |
TINYBLOB | variable space cost, 1-byte length prefix | max 255 (2^8-1) bytes |
BLOB(M) | variable space cost, 2-byte length prefix | max 65535 (2^16-1) bytes | M means the smallest number of bytes.
MEDIUMBLOB | variable space cost, 3-byte length prefix | max 16,777,215 (2^24-1) bytes |
LONGBLOB | variable space cost, 4-byte length prefix | max 4,294,967,295 (2^32-1) bytes, i.e. 4GB |
ENUM('value1', ...) [charset] [collate] | variable space cost, stored internally as integers | less than 3000 list members |
SET('value1', ...) [charset] [collate] | variable space cost, stored internally as integers | max 64 distinct members |
JSON | roughly the same as LONGTEXT | limited by the max_allowed_packet system variable |

Fractional Seconds Precision Storage (Since MySQL 5.6.4)

Precision | Storage Required
0         | 0 bytes
1, 2      | 1 byte
3, 4      | 2 bytes
5, 6      | 3 bytes

References

[1] High Performance MySQL by Baron Schwartz, Vadim Tkachenko, Peter Zaitsev, Derek J. Balling

[2] MySQL 5.7 Reference Manual

Introduction

Computer users want to do more than one thing at a time. Software that can do more than one thing at a time is called concurrent software.

The Java platform is designed to support concurrent programming. Since Java SE 5, the Java platform has also included high-level concurrency APIs.

Processes and Threads

In concurrent programming, there are two basic units of execution: processes and threads. In the Java programming language, concurrent programming is mostly concerned with threads.

A computer system has many active processes and threads. In a computer system with only a single execution core, only one thread actually executes at any given moment. Processing time for a single core is shared among processes and threads through an OS feature called time slicing.

It’s becoming more and more common for computer systems to have multiple processors or processors with multiple execution cores.

Processes

A process has a self-contained execution environment. A process generally has a complete, private set of basic run-time resources, for example, memory space.

Processes are often seen as synonymous with programs or applications. Most implementations of the Java virtual machine run as a single process. A Java application can create additional processes using a ProcessBuilder object.

Threads

Threads are sometimes called lightweight processes. Both processes and threads provide an execution environment, but creating a new thread requires fewer resources than creating a new process.

Threads exist within a process, and every process has at least one thread. Threads share the process’s resources, including memory and open files. This makes for efficient, but potentially problematic, communication.

Difference between processes and threads

Threads exist within a process, and every process has at least one thread. Both processes and threads are units of execution, and they have an execution environment.

A process is a minimal resource assignment unit, but a thread is a minimal execution unit.

A process has a self-contained execution environment, and no resources are shared between processes. Threads, in contrast, have some private resources within a process, and multiple threads in the same process can also share their process's resources.

The Advantages and Disadvantages of Threads

Advantages

  • Improve the performance of applications.
  • Asynchronous workflows.
  • Exploiting multiple processors.
  • More responsive user interfaces.

Disadvantages

  • Safety.
  • Liveness.
  • Performance. Threads introduce additional performance costs, for example, context switches and synchronization costs.

When used properly, multithreading's benefits outweigh its costs.

Thread Objects

Each thread is associated with an instance of the class Thread. There are two basic strategies for using Thread objects to create a concurrent application.

  • To directly control thread creation and management, simply instantiate Thread each time.
  • To abstract thread management from the rest of your application, pass the application’s task to an executor.

An application that creates an instance of Thread must provide the code that will run in that thread. There are two ways to create a Thread instance:

  • Provide a runnable object.
  • Subclass Thread.

Creating a Thread from a class that implements the Runnable interface is more general, because the Runnable object can subclass a class other than Thread.
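A minimal sketch of both approaches (the class name is hypothetical):

public class HelloRunnable implements Runnable {
    @Override
    public void run() {
        System.out.println("Hello from a Runnable!");
    }

    public static void main(String[] args) {
        new Thread(new HelloRunnable()).start(); // pass a Runnable (preferred)
        new Thread(() -> System.out.println("Hello from a lambda!")).start();
    }
}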

Methods of Thread class:

  • Constructors
    • Thread(), Thread(String name)
    • Thread(Runnable target), Thread(Runnable target, String name)
    • Thread(ThreadGroup group, Runnable target, String name), Thread(ThreadGroup group, Runnable target, String name, long stackSize)
    • Thread(ThreadGroup group, String name)
  • Static Methods
    • static int activeCount()
    • static Thread currentThread()
    • static void dumpStack(), static Map<Thread, StackTraceElement[]> getAllStackTraces()
    • static int enumerate(Thread[] tarray)
    • static Thread.UncaughtExceptionHandler getDefaultUncaughtExceptionHandler()
    • static boolean holdsLock(Object obj)
    • static boolean interrupted()
    • static void yield()
  • Get Thread Information
    • long getId()
    • String getName()
    • int getPriority()
    • ClassLoader getContextClassLoader()
    • StackTraceElement[] getStackTrace()
    • Thread.State getState()
    • ThreadGroup getThreadGroup()
    • Thread.UncaughtExceptionHandler getUncaughtExceptionHandler()
  • Check
    • void checkAccess()
    • boolean isAlive()
    • boolean isDaemon()
    • boolean isInterrupted()
  • Operating thread
    • void interrupt()
    • void join(), void join(long millis), void join(long millis, int nanos)
    • void run()
    • void setName(String name), void setPriority(int newPriority), void setContextClassLoader(ClassLoader c), void setDaemon(boolean on)
    • void sleep(long millis), void sleep(long millis, int nanos)
    • void start()

Why are Thread.stop, Thread.suspend and Thread.resume Deprecated?

Interrupts

An interrupt is an indication to a thread that it should stop what it is doing and do something else. Thread.interrupt() sets the interrupted status/flag of the target thread. The code running in that target thread MAY poll the interrupted status and handle it appropriately. Some methods that block, such as Object.wait(), may consume the interrupted status immediately and throw an appropriate exception (usually InterruptedException). Thread interruption is a gentle way to stop a thread: it gives threads a chance to exit cleanly, as opposed to the deprecated Thread.stop(), which forcibly stops the thread.

Interruption in Java is not preemptive. Threads have to cooperate in order to process the interrupt properly. If the target thread does not poll the interrupted status, the interrupt is effectively ignored. Polling occurs via the Thread.interrupted() method, which returns the current thread's interrupted status and clears the interrupt flag. Usually, the thread might then do something such as throw InterruptedException. If a thread goes a long time without invoking a method that throws InterruptedException, it must periodically invoke Thread.interrupted(), which returns true if an interrupt has been received. For example:

while (...) {
    doSomething();
    if (Thread.interrupted()) {
        // We've been interrupted
        System.out.println("Thread interrupted!");
        return; // or break out of the loop
    }
}

// alternatively, rethrow the interrupt as an exception:
if (Thread.interrupted()) {
    throw new InterruptedException();
}

Some API methods have built-in interrupt handling

  • Object.wait()
  • Thread.sleep(), Thread.join()
  • Most java.util.concurrent structures.
  • Java NIO (it does not use InterruptedException, instead using ClosedByInterruptException).

Joins

The join method allows one thread to wait for the completion of another. For example, the current thread waits for thread t to complete:

t.join();

Like sleep, join doesn't release the locks the current thread holds; it just suspends the current thread.

Difference Between Wait and Sleep

  • wait is for inter-thread coordination, while sleep is not. Sleeping does not release the locks the thread holds, whereas wait releases the lock on the object on which wait() is called.
  • sleep just suspends a thread's execution for a fixed time, but wait suspends a thread's execution until notify is called.
  • wait must happen in a block synchronized on the monitor object (otherwise an IllegalMonitorStateException occurs), whereas sleep does not:
Object lock = ...;
synchronized (lock) {
    lock.wait();
}
  • You call wait on the object itself, whereas you call sleep on Thread.

Synchronization

Threads communicate primarily by sharing access to fields and the objects that reference fields refer to. This form of communication is extremely efficient, but it makes two kinds of errors possible: thread interference and memory consistency errors. The tool needed to prevent these errors is synchronization.

However, synchronization can introduce thread deadlock and thread contention (starvation and livelock).

  • Thread Interference: Errors are introduced when multiple threads access shared data.
  • Memory Consistency Errors: Errors that result from inconsistent views of shared memory.
  • Synchronized Methods: It can effectively prevent thread interference and memory consistency errors.
  • Implicit Locks: synchronization is based on implicit locks.
  • Atomic Access: operations that can’t be interfered with by other threads.

Thread Interference

class Counter {
    private int c = 0;

    public void increment() {
        c++;
    }

    public void decrement() {
        c--;
    }

    public int value() {
        return c;
    }
}

The increment() and decrement() methods are not atomic operations. Each consists of three steps: retrieve the value of the variable, update the value, and store the result back in the variable. When multiple threads invoke these methods, they interfere with each other, because the three steps from different threads can interleave, so the result of each thread is unpredictable.

Memory Consistency Errors

Memory consistency errors occur when different threads have inconsistent views of what should be the same data: when another thread updates a variable's value, the current thread may still read the old value.

The key to avoiding memory consistency errors is understanding the happens-before relationship. This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement.

There are several actions that create happens-before relationships:

  • Single thread rule: Each action in a single thread happens-before every action in that thread that comes later in the program order.
  • Monitor lock rule (synchronization): An unlock on a monitor lock (exiting synchronized method/block) happens-before every subsequent acquiring on the same monitor lock.
  • Volatile variable rule: A write to a volatile field happens-before every subsequent read of that same field (see the sketch after this list).
  • Thread start rule: A call to Thread.start() on a thread happens-before every action in the started thread.
  • Thread join rule: All actions in a thread happen-before any other thread successfully returns from a join on that thread.
  • Transitivity: If A happens-before B, and B happens-before C, then A happens-before C.
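A minimal sketch of the volatile variable rule (class and field names are hypothetical):

class FlagExample {
    private volatile boolean ready = false;
    private int value = 0;

    void writer() {        // runs in thread A
        value = 42;        // ordinary write...
        ready = true;      // ...happens-before this volatile write
    }

    void reader() {        // runs in thread B
        if (ready) {                    // volatile read
            System.out.println(value); // guaranteed to see 42
        }
    }
}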

Synchronized Methods

The Java programming language provides two basic synchronization idioms: synchronized methods and synchronized statements. The synchronized keyword makes compound operations atomic and establishes happens-before relationships between threads that access the same variables.

class SynchronizedCounter {
    private int c = 0;

    public synchronized void increment() {
        c++;
    }

    public synchronized void decrement() {
        c--;
    }

    public synchronized int value() {
        return c;
    }
}

Synchronized methods enable a simple strategy for preventing thread interference and memory consistency errors. First, it is not possible for two invocations of synchronized methods on the same object to interleave. Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object.

Intrinsic Locks and Synchronization

Synchronization is built around an internal entity known as the intrinsic lock or monitor lock. (The API specification often refers to this entity simply as a “monitor.”) Intrinsic locks play a role in both aspects of synchronization: enforcing exclusive access to an object’s state and establishing happens-before relationships.

Every object has an intrinsic lock associated with it.

Locks in Synchronized Methods

When a thread invokes a synchronized method, it automatically acquires the intrinsic lock for that method's object and releases it when the method returns. The lock release occurs even if the return was caused by an uncaught exception.

A static synchronized method is associated with a class. A thread acquires the intrinsic lock for the Class object associated with the class.

Synchronized Statements

public void add(String name) {
    synchronized (this) {
        lastName = name;
        nameCount++;
    }
    nameList.add(name);
}

Invoking other objects’ methods from synchronized code can create liveness problems.

Synchronized statements are also useful for improving concurrency with fine-grained synchronization.

Reentrant Synchronization

Allowing a thread to acquire the same lock more than once enables reentrant synchronization.

Atomic Access

Common Atomic Actions

  • Reads and writes are atomic for reference variables and for most primitive variables (except long and double).
  • Reads and writes are atomic for all variables declared volatile (including long and double variables).

Atomic actions cannot be interleaved, so they can be used without fear of thread interference. However, this does not eliminate all need to synchronize atomic actions, because memory consistency errors are still possible. Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a happens-before relationship with subsequent reads of the same variable.

Using simple atomic variable access is more efficient than accessing these variables through synchronized code, but requires more care by the programmer to avoid memory consistency errors. Some of the classes in the java.util.concurrent package provide atomic methods that do not rely on synchronization.

Liveness

The most common kind of liveness problem is the deadlock, others are starvation and livelock.

Deadlock

Deadlock describes a situation where two or more threads are blocked forever, waiting for each other.

public class DeadlockExample {
    private static Object lock1 = new Object();
    private static Object lock2 = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lock1) {
                try {
                    // ensure both threads have acquired their first lock
                    // and are waiting to acquire the second lock
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                synchronized (lock2) {}
            }
        }).start();
        new Thread(() -> {
            synchronized (lock2) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                synchronized (lock1) {}
            }
        }).start();
    }
}

Starvation and Livelock

Starvation and livelock are much less common a problem than deadlock, but are still problems that every designer of concurrent software is likely to encounter.

Starvation describes a situation where a thread is unable to gain regular access to shared resources and is unable to make progress.

Livelock is a situation in which threads are too busy responding to each other to make further progress.

Guarded Blocks

Threads often have to coordinate their actions. The most common coordination idiom is the guarded block. Such a block begins by polling a condition that must be true before the block can proceed.

A simple guard that polls in a loop (inefficient, because it busy-waits):

public void guardedJoy() {
    while (!joy) {}
    System.out.println("Joy has been achieved!");
}

A more efficient guard invokes Object.wait:

public synchronized void guardedJoy() {
    while (!joy) {
        try {
            wait();
        } catch (InterruptedException e) {}
    }
    System.out.println("Joy efficiently has been achieved!");
}

public synchronized void notifyJoy() {
    joy = true;
    notifyAll();
}

Using guarded blocks can create a Producer-Consumer application.

Immutable Objects

An object is considered immutable if its state cannot change after it is constructed.

Immutable objects are particularly useful in concurrent applications. Since they cannot change state, they cannot be corrupted by thread interference or observed in an inconsistent state.

Strategies for defining immutable objects

  • Don't provide "setter" methods, i.e., methods that modify fields or objects referred to by fields.
  • Make all fields final and private.
  • Don't allow subclasses to override methods. The simplest way to do this is to declare the class as final. A more sophisticated approach is to make the constructor private and construct instances in factory methods in the class.
  • If the instance fields include references to mutable objects, don't allow those objects to be changed: don't provide methods that modify the mutable objects, and don't share references to them. (A minimal sketch applying these strategies follows this list.)
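A minimal sketch applying these strategies (the class name is hypothetical):

public final class ImmutablePoint {      // final: no subclass can override methods
    private final int x;                 // fields are final and private
    private final int y;

    public ImmutablePoint(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }      // accessors only, no setters
    public int getY() { return y; }
}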

High Level Concurrency Objects

These low-level APIs are adequate for very basic tasks, but higher-level building blocks are needed for more advanced tasks. This is especially useful for massively concurrent applications in multiprocessor and multi-core systems.

High Level Concurrency Features:

  • Lock objects support locking idioms that simplify many concurrent applications.
  • Executors define a high-level API for launching and managing threads. Executor implementations provided by java.util.concurrent provide thread pool management suitable for large-scale applications.
  • Concurrent collections make it easier to manage large collections of data, and can greatly reduce the need for synchronization.
  • Atomic variables have features that minimize synchronization and help avoid memory consistency errors.
  • ThreadLocalRandom provides efficient generation of pseudorandom numbers from multiple threads.

Lock Objects

Synchronized code relies on a simple kind of reentrant lock. This kind of lock is easy to use but has many limitations. More sophisticated locking idioms are supported by the java.util.concurrent.locks package.

Lock objects work very much like the implicit locks used by synchronized code. The biggest advantage of Lock objects over implicit locks is their ability to back out of an attempt to acquire a lock. The tryLock method backs out if the lock is not available immediately or before a timeout expires.

We can use Lock objects to avoid the deadlock problem: first we try to acquire all the locks we need with Lock.tryLock(); if any acquisition fails, we release every lock we did acquire and retry, so we never hold one lock while blocking on another.
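A minimal sketch of this idiom (class and method names are hypothetical):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockFreeWorker {
    private final Lock lock1 = new ReentrantLock();
    private final Lock lock2 = new ReentrantLock();

    public void doWork() {
        while (true) {
            boolean got1 = lock1.tryLock();
            boolean got2 = lock2.tryLock();
            if (got1 && got2) {
                try {
                    // both locks held: do the protected work here
                    return;
                } finally {
                    lock2.unlock();
                    lock1.unlock();
                }
            }
            // back out of any partial acquisition and retry
            if (got1) lock1.unlock();
            if (got2) lock2.unlock();
        }
    }
}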

Executors

In large-scale applications, it makes sense to separate thread management and creation from the rest of the application. Objects that encapsulate these functions are known as executors.

  • Executor Interfaces: defines the three executor object types.
  • Thread Pools: are the most common kind of executor implementation.
  • Fork/Join: is a framework for taking advantage of multiple processors.
  • Executors: is a utility class that has factory and utility methods for Executor, ExecutorService, ScheduledExecutorService, ThreadFactory, and Callable classes.

Hierarchy of Executor

(I)Executor
|----(I)ExecutorService
|--------(I)ScheduledExecutorService
|--------(A)AbstractExecutorService
|------------ForkJoinPool
|------------ThreadPoolExecutor
|----------------ScheduledThreadPoolExecutor
Executors (utility class)

Executor Interfaces

The java.util.concurrent package defines three executor interfaces: Executor, ExecutorService, ScheduledExecutorService. Typically, variables that refer to executor objects are declared as one of these three interface types, not with an executor class type.

The Executor interface provides a single method execute() designed to be a replacement for a common thread-creation idiom.

new Thread(r).start();  // common thread-creation idiom
executor.execute(r);    // Executor-based replacement

The ExecutorService interface provides the methods execute and submit. Like execute, the submit method accepts Runnable objects, but it also accepts Callable objects, which allow the task to return a value. The submit method returns a Future object.
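
For example, a small sketch of submit with a Callable and its Future result:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitExample {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        // submit accepts a Callable and returns a Future holding the result
        Future<Integer> future = executor.submit(() -> 1 + 1);
        System.out.println(future.get()); // blocks until the task completes: 2
        executor.shutdown();
    }
}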

The ScheduledExecutorService interface supplements the methods of its parent ExecutorService with schedule, which executes a Runnable or Callable task after a specified delay. In addition, the interface defines scheduleAtFixedRate and scheduleWithFixedDelay, which execute specified tasks repeatedly, at defined intervals.
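
A brief sketch of both scheduling styles (the delays and periods are arbitrary):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduleExample {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        // run once after a 1-second delay
        scheduler.schedule(() -> System.out.println("delayed task"), 1, TimeUnit.SECONDS);
        // run every 2 seconds after an initial 1-second delay,
        // until scheduler.shutdown() is called
        scheduler.scheduleAtFixedRate(() -> System.out.println("tick"), 1, 2, TimeUnit.SECONDS);
    }
}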

Thread Pools

Most executor implementations use thread pools. One common type of thread pool is the fixed thread pool. A simple way to create an executor that uses a fixed thread pool is to invoke the newFixedThreadPool factory method of java.util.concurrent.Executors. This class also provides the following factory methods:

  • newCachedThreadPool
  • newSingleThreadExecutor
  • several factory methods that return ScheduledExecutorService executors

If none of the executors provided by the factory methods of Executors meet your needs, constructing an instance of ThreadPoolExecutor or ScheduledThreadPoolExecutor will give you additional options.

To construct a ThreadPoolExecutor object, you must specify at least five arguments:

  • ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue)
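
A sketch of such a construction (the pool sizes, keep-alive time, and queue capacity are illustrative values):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// 2 core threads, up to 4 threads under load; threads above the core size
// are reclaimed after 60 seconds idle; at most 100 tasks wait in the queue.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 4, 60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>(100));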

Fork/Join

The fork/join framework is an implementation of the ExecutorService interface that helps you take advantage of multiple processors. It is designed for work that can be broken into smaller pieces recursively.

The fork/join framework is distinct because it uses a work-stealing algorithm. Worker threads that run out of things to do can steal tasks from other threads that are still busy.

The center of the fork/join framework is the ForkJoinPool class, an extension of the AbstractExecutorService class. ForkJoinPool implements the core work-stealing algorithm and can execute ForkJoinTask processes.

You submit tasks to a ForkJoinPool similarly to how you submit tasks to an ExecutorService. You can submit two types of tasks: RecursiveAction, which does not return a result, and RecursiveTask, which can return a result. Both are subclasses of ForkJoinTask, and all of these task classes are abstract.

There are some generally useful features in Java SE that are already implemented using the fork/join framework.

  • Arrays class methods: parallelSort()
  • java.util.stream package

fork/join framework pseudocode:

if (my portion of the work is small enough)
    do the work directly
else
    split my work into two pieces
    invoke the two pieces and wait for the results

Example of ForkJoinPool

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class Main {
    public static void main(String[] args) {
        ForkJoinPool forkJoinPool = new ForkJoinPool(5);
        forkJoinPool.invoke(new MyRecursiveAction(100));
    }
}

class MyRecursiveAction extends RecursiveAction {
    private long workload = 0;
    private static final long THRESHOLD = 16;

    public MyRecursiveAction(long workload) {
        this.workload = workload;
    }

    @Override
    protected void compute() {
        if (this.workload <= THRESHOLD) {
            System.out.println("Doing workload myself : " + this.workload);
        } else {
            System.out.println("Splitting workload : " + this.workload);
            // invokeAll forks the subtasks and waits for them to complete,
            // so the parent does not finish before its children
            invokeAll(createSubtasks());
        }
    }

    private List<MyRecursiveAction> createSubtasks() {
        List<MyRecursiveAction> subtasks = new ArrayList<>();
        subtasks.add(new MyRecursiveAction(this.workload / 2));
        subtasks.add(new MyRecursiveAction(this.workload / 2));
        return subtasks;
    }
}

Concurrent Collections

Concurrent collections are high-concurrency, thread-safe collection implementations, for example CopyOnWriteArrayList and ConcurrentHashMap.
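
For instance, a ConcurrentHashMap can be updated from many threads without external locking (this word-count snippet is illustrative):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

Map<String, Integer> counts = new ConcurrentHashMap<>();
// merge is atomic in ConcurrentHashMap, so concurrent increments are safe
counts.merge("requests", 1, Integer::sum);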

Atomic Variables

The java.util.concurrent.atomic package defines classes that support atomic operations on a single variable. All classes have get and set methods that work like reads and writes on volatile variables. A set has a happens-before relationship with any subsequent get on the same variable. The atomic compareAndSet method also has these memory consistency features, as do the simple atomic arithmetic methods that apply to integer atomic variables.

Example of Basic Usage of AtomicInteger

AtomicInteger atomicInteger = new AtomicInteger(0);
// atomically apply the accumulator function to the current value and 10
atomicInteger.accumulateAndGet(10, (x, y) -> x + y);
System.out.println(atomicInteger.get()); // 10

Example of Using Atomic Variable instead of Synchronized Methods

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private AtomicInteger c = new AtomicInteger(0);

    public void increment() {
        c.incrementAndGet();
    }

    public void decrement() {
        c.decrementAndGet();
    }

    public int value() {
        return c.get();
    }
}

Concurrent Random Numbers

The package java.util.concurrent includes a convenience class, ThreadLocalRandom, for applications that expect to use random numbers from multiple threads or ForkJoinTasks.

For concurrent access, using ThreadLocalRandom instead of Math.random() results in less contention and better performance.

Example of ThreadLocalRandom

int r = ThreadLocalRandom.current().nextInt(1, 10);

Conclusion

In multithreaded programming, allowing multiple threads to access shared resources simultaneously may cause unpredictable results or corrupted values. For thread safety, we need to consider two factors: thread interference and memory consistency errors.

Thread interference can be solved by atomic access (commonly using exclusive locks). Memory consistency errors can be solved by establishing a happens-before relationship between reads and writes of the same variable.

We can simply use synchronized to solve both thread-safety problems. But for high concurrency we should use as few locks as possible, or even none. So we consider using a combination of explicit locks (reentrant locks), immutable objects, volatile variables, and atomic variables instead.

Locks may cause liveness problems: deadlock, starvation, and livelock. We can follow some coding principles to avoid these problems.

Guarded blocks are commonly used for thread cooperation; the most common example is the Producer-Consumer application.

Executors are for efficient thread creation and management.

Concurrent collections are high-concurrency, thread-safe collection implementations.


This post discusses Java container classes. It’s based on Java SE 8.

Collection

Collection interface

The root interface in the collection hierarchy. A collection represents a group of objects, known as its elements. Some collections allow duplicate elements and others do not. Some are ordered and others unordered. The JDK does not provide any direct implementations of this interface. This interface is typically used to pass collections around and manipulate them where maximum generality is desired.

Methods of Collection

Basic Operations

  • boolean add(E e), boolean addAll(Collection<? extends E> c)
  • boolean remove(Object o), boolean removeAll(Collection<?> c)
  • void clear()
  • boolean contains(Object o), boolean containsAll(Collection<?> c)
  • int size()
  • boolean equals(Object o)
  • boolean isEmpty()
  • Iterator<E> iterator()
  • Object[] toArray(), <T> T[] toArray(T[] a)

Advanced Operations

  • default Stream<E> stream()
  • default Stream<E> parallelStream()
  • default boolean removeIf(Predicate<? super E> filter)
  • boolean retainAll(Collection<?> c)
  • default Spliterator<E> spliterator()

AbstractCollection

This class provides a skeletal implementation of the Collection interface, to minimize the effort required to implement Collection interface.

Most method implementations in AbstractCollection depend on its abstract methods iterator() and size(). These two methods are the most important and useful for collection classes.

Non-abstract methods in this class may be overridden if a subclass admits a more efficient implementation.

List

Hierarchy of List:

(I)List
|----(A)AbstractList
|--------ArrayList
|--------Vector
|--------(A)AbstractSequentialList
|------------LinkedList
|----java.util.concurrent.CopyOnWriteArrayList

List interface

This interface gives the user control over where elements are inserted. The user can access elements by their integer index, and search for elements in the list.

The List interface provides a special iterator, called a ListIterator, that allows element insertion add(E e) and replacement set(E e), and bidirectional access previous() in addition to the normal operations that the Iterator interface provides.

Additional methods of List

  • E get(int index)
  • int indexOf(Object o)
  • int lastIndexOf(Object o)
  • ListIterator listIterator()
  • void replaceAll(UnaryOperator operator)
  • E set(int index, E element)
  • void sort(Comparator c)
  • List<E> subList(int fromIndex, int toIndex)

AbstractList

This class provides a skeletal implementation of the List interface to minimize the effort required to implement this interface backed by a “random access” data store (such as an array).

AbstractSequentialList

This class provides a skeletal implementation of the List interface to minimize the effort required to implement this interface backed by a “sequential access” data store (such as a linked list).

ArrayList & Vector

Resizable-array implementation of the List interface. Implements all optional list operations, and permits all elements, including null. In addition to implementing the List interface, this class provides methods to manipulate the size of the array that is used internally to store the list.

ArrayList class is roughly equivalent to Vector, except that it is unsynchronized.

Each ArrayList instance has a capacity. The capacity is the size of the array used to store the elements in the list. It is always at least as large as the list size. As elements are added to an ArrayList, its capacity grows automatically. The details of the growth policy are not specified beyond the fact that adding an element has constant amortized time cost.

An application can increase the capacity of an ArrayList instance before adding a large number of elements using the ensureCapacity operation. This may reduce the amount of incremental reallocation.
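
For example (a sketch; note that ensureCapacity is declared on ArrayList itself, not on the List interface):

import java.util.ArrayList;

ArrayList<Integer> list = new ArrayList<>();
list.ensureCapacity(1_000_000); // one allocation instead of repeated growth
for (int i = 0; i < 1_000_000; i++) {
    list.add(i);
}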

Note that this implementation is not synchronized. If multiple threads access an ArrayList instance concurrently, and at least one of the threads modifies the list structurally, it must be synchronized externally. If the list is used by multiple threads, it should be “wrapped” using the Collections.synchronizedList method. This is best done at creation time, to prevent accidental unsynchronized access to the list.

The iterators returned by this class’s iterator and listIterator methods are fail-fast: if the list is structurally modified at any time after the iterator is created, in any way except through the iterator’s own remove or add methods, the iterator will throw a ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future.

LinkedList

Doubly-linked list implementation of the List and Deque interfaces.

All of the operations perform as could be expected for a doubly-linked list. Operations that index into the list will traverse the list from the beginning or the end, whichever is closer to the specified index.

Note that this implementation is not synchronized. If multiple threads access a linked list concurrently, and at least one of the threads modifies the list structurally, it must be synchronized externally. If the list is used by multiple threads, it should be “wrapped” using the Collections.synchronizedList method. This is best done at creation time, to prevent accidental unsynchronized access to the list.

The iterators returned by this class’s iterator and listIterator methods are fail-fast. Same with ArrayList.

CopyOnWriteArrayList

What is CopyOnWriteArrayList

java.util.concurrent.CopyOnWriteArrayList is a thread-safe variant of ArrayList in which all mutative operations (add, set, and so on) are implemented by making a fresh copy of the underlying array.

This is ordinarily too costly, but may be more efficient than alternatives when traversal operations vastly outnumber mutations, and is useful when you cannot or don’t want to synchronize traversals, yet need to preclude interference among concurrent threads. The “snapshot” style iterator method uses a reference to the state of the array at the point that the iterator was created. This array never changes during the lifetime of the iterator, so interference is impossible and the iterator is guaranteed not to throw ConcurrentModificationException. The iterator will not reflect additions, removals, or changes to the list since the iterator was created. Element-changing operations on iterators themselves (remove, set, and add) are not supported. These methods throw UnsupportedOperationException.
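
A small sketch of the snapshot behavior:

import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

List<String> list = new CopyOnWriteArrayList<>(new String[]{"a", "b"});
Iterator<String> it = list.iterator(); // snapshot of ["a", "b"]
list.add("c");                         // the mutation copies the array
while (it.hasNext()) {
    System.out.print(it.next() + " "); // prints: a b  (no "c", no exception)
}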

Why is the CopyOnWriteArrayList Thread-safe

All write operations take a lock, so writes are serialized.

All mutative operations operate on a new copy of the list and then update the volatile array reference. The reference update is an atomic operation, hence reads always see the array in a consistent state.

Applicability

Updating this collection a lot will kill performance. You should only use it when reads account for upwards of 90% of operations.

Consequences

All read and write operations of CopyOnWriteArrayList are thread-safe. Some methods take a ReentrantLock and copy the list; others do not.

It greatly improves the concurrency of read operations while remaining thread-safe, but write operations are very costly.

Questions

Q: Why do all mutative operations copy the list? Why do read and traversal operations not need locks?

A: The key to this idea is that writes are accomplished by copying the existing array, modifying the copy, and replacing the reference. It follows that, once set, the array pointed to by the reference is read-only (no mutation is done directly on it). Therefore, readers can access it safely without synchronization.

Queue

Hierarchy of Queue:

(I)java.util.Queue
|----(A)java.util.AbstractQueue
|--------java.util.PriorityQueue
|----(I)java.util.Deque
|--------java.util.ArrayDeque
|----(I)java.util.concurrent.BlockingQueue
|--------(I)BlockingDeque
|------------LinkedBlockingDeque
|--------(I)TransferQueue
|------------LinkedTransferQueue
|--------ArrayBlockingQueue
|--------LinkedBlockingQueue
|--------PriorityBlockingQueue
|--------DelayQueue
|--------SynchronousQueue
|----java.util.concurrent.ConcurrentLinkedQueue
|----java.util.concurrent.ConcurrentLinkedDeque

Queue interface

A group of elements held in first-in-first-out (FIFO) order.

The Queue interface does not define the blocking queue methods, which are common in concurrent programming.

Queue implementations generally do not allow insertion of null elements, although some implementations do not prohibit it. Even then, null should not be inserted into a queue, as null is also used as a special return value by the poll method to indicate that the queue contains no elements.

Methods of Queue

  • insert
    • boolean add(E e) - Inserts an element into this queue; throws an exception if no space is available.
    • boolean offer(E e) - Similar to add(), but returns false instead of throwing an exception.
  • remove
    • E poll() - Retrieves and removes the head of this queue, or returns null if the queue is empty.
    • E remove() - Similar to poll(), but throws an exception if the queue is empty.
  • examine
    • E peek() - Retrieves, but does not remove, the head of this queue, or returns null if the queue is empty.
    • E element() - Similar to peek(), but throws an exception if the queue is empty.

Deque interface

A group of elements that supports element insertion and removal at both head and tail. A Deque can also be used as a LIFO (last-in-first-out) stack.

The name deque is short for “double ended queue” and is usually pronounced “deck”.

Methods of Deque

  • insert
    • boolean add(E e), void addFirst(E e), void addLast(E e)
    • boolean offer(E e), boolean offerFirst(E e), boolean offerLast(E e)
    • void push(E e)
  • remove
    • E remove(), E removeFirst(), E removeLast()
    • boolean removeFirstOccurrence(Object o), boolean removeLastOccurrence(Object o)
    • E poll(), E pollFirst(), E pollLast()
    • E pop()
  • examine
    • E getFirst(), E getLast()
    • E peek(), E peekFirst(), E peekLast()
    • E element()
  • boolean contains(Object o)
  • Iterator descendingIterator()
  • Iterator iterator()
  • int size()

BlockingQueue interface

java.util.concurrent.BlockingQueue is a queue that additionally supports operations that wait for the queue to become non-empty when retrieving an element, and wait for space to become available in the queue when storing an element.

BlockingQueue implementations are designed to be used primarily for producer-consumer queues. A BlockingQueue can safely be used with multiple producers and multiple consumers.

BlockingQueue implementations are thread-safe. All queuing methods achieve their effects atomically using internal locks or other forms of concurrency control. However, the bulk Collection operations addAll, containsAll, retainAll and removeAll are not necessarily performed atomically. So it is possible, for example, for addAll(c) to fail after adding only some of the elements in c.

Methods of BlockingQueue

  • insert
    • boolean add(E e)
    • boolean offer(E e)
    • void put(E e) - blocks
    • boolean offer(E e, long time, TimeUnit unit) - blocks and times out
  • remove
    • boolean remove(Object o)
    • E poll()
    • E take() - blocks
    • E poll(long time, TimeUnit unit) - blocks and times out
  • examine
    • E element()
    • E peek()
  • boolean contains(Object o)
  • int drainTo(Collection c)
  • int remainingCapacity()

Usage example of producer-consumer

class Producer implements Runnable {
    private final BlockingQueue queue;
    Producer(BlockingQueue q) { queue = q; }
    public void run() {
        try {
            while (true) { queue.put(produce()); } // blocks when the queue is full
        } catch (InterruptedException ex) { ... handle ... }
    }
    Object produce() { ... }
}

class Consumer implements Runnable {
    private final BlockingQueue queue;
    Consumer(BlockingQueue q) { queue = q; }
    public void run() {
        try {
            while (true) { consume(queue.take()); } // blocks when the queue is empty
        } catch (InterruptedException ex) { ... handle ... }
    }
    void consume(Object x) { ... }
}

class Setup {
    void main() {
        BlockingQueue q = new SomeQueueImplementation();
        Producer p = new Producer(q);
        Consumer c1 = new Consumer(q);
        Consumer c2 = new Consumer(q);
        new Thread(p).start();
        new Thread(c1).start();
        new Thread(c2).start();
    }
}

BlockingDeque interface

It is the combination of BlockingQueue and Deque interface. You can use this interface for both blocking queues and blocking stacks.

TransferQueue interface

The TransferQueue interface is a refinement of the BlockingQueue interface in which producers can wait for consumers to receive elements. A BlockingQueue can only put an element into the queue (and block if the queue is full). With a TransferQueue, however, the producer can also block (whether the queue is full or not) until a consumer thread receives its element.

Methods of TransferQueue

  • int getWaitingConsumerCount() - Returns an estimate of the number of consumers waiting to receive elements via take() or timed poll.
  • boolean hasWaitingConsumer() - Returns true if there is at least one consumer waiting to receive an element via take() or timed poll.
  • void transfer(E e) - Transfers the element to a consumer, waiting if necessary to do so.
  • boolean tryTransfer(E e) - Transfers the element to a waiting consumer immediately, if possible.
  • boolean tryTransfer(E e, long timeout, TimeUnit unit) - Transfers the element to a consumer if it is possible to do so before the timeout elapses.

Usage Example of TransferQueue

public class TransferQueueExample {

    TransferQueue<String> queue = new LinkedTransferQueue<String>();

    class Producer implements Runnable {
        @Override
        public void run() {
            for (int i = 0; i < 2; i++) {
                try {
                    System.out.println("Producer waiting to transfer: " + i);
                    queue.transfer("" + i); // blocks until a consumer receives the element
                    System.out.println("Producer transferred: " + i);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }

    class Consumer implements Runnable {
        @Override
        public void run() {
            for (int i = 0; i < 2; i++) {
                try {
                    Thread.sleep(2000);
                    System.out.println("Consumer waiting to consume: " + i);
                    queue.take();
                    System.out.println("Consumer consumed: " + i);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) {
        TransferQueueExample example = new TransferQueueExample();
        new Thread(example.new Producer()).start();
        new Thread(example.new Consumer()).start();
    }
}

The output is:

Producer waiting to transfer: 0
Consumer waiting to consume: 0
Consumer consumed: 0
Producer transferred: 0
Producer waiting to transfer: 1
Consumer waiting to consume: 1
Consumer consumed: 1
Producer transferred: 1

The producer thread is blocked until the consumer thread takes the product away.

AbstractQueue

This class provides skeletal implementations of some Queue operations. Its implemented methods (such as add, remove, and element) work by calling the abstract methods offer(), poll(), and peek().

PriorityQueue

An unbounded priority queue based on a priority heap. The elements of the priority queue are ordered according to their natural ordering (ascending order, sorted by e1.compareTo(e2)), or by a Comparator provided at queue construction time, depending on which constructor is used. A priority queue does not permit null elements. A priority queue relying on natural ordering also does not permit insertion of non-comparable objects.

Its implementation provides O(log(n)) time for the enqueuing and dequeuing methods (offer, poll, remove() and add); linear time for the remove(Object) and contains(Object) methods; and constant time for the retrieval methods (peek, element, and size).
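
For example, a minimal sketch using natural ordering:

import java.util.PriorityQueue;

PriorityQueue<Integer> pq = new PriorityQueue<>();
pq.offer(3);
pq.offer(1);
pq.offer(2);
// poll always returns the smallest remaining element
System.out.println(pq.poll()); // 1
System.out.println(pq.poll()); // 2
System.out.println(pq.poll()); // 3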

Concurrent Queues

DelayQueue

An unbounded blocking queue of Delayed elements, in which an element can only be taken when its delay has expired. If no delay has expired there is no head and poll will return null.

DelayQueue<E extends Delayed> is a priority queue ordered by delay.

Methods of Delayed interface

  • long getDelay(TimeUnit unit) - Returns the remaining delay associated with this object, in the given time unit.
  • int compareTo(Delayed obj)

The methods of DelayQueue are similar to those of BlockingQueue.

Usage Example

class MyDelayed implements Delayed {
    private String name;
    private long time;

    public MyDelayed(String name, long delayMillisecond) {
        this.name = name;
        this.time = System.currentTimeMillis() + delayMillisecond;
    }

    @Override
    public long getDelay(TimeUnit unit) {
        // remaining delay in the requested unit
        return unit.convert(time - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    @Override
    public int compareTo(Delayed o) {
        // Long.compare avoids the overflow risk of casting a subtraction to int
        return Long.compare(this.getDelay(TimeUnit.MILLISECONDS), o.getDelay(TimeUnit.MILLISECONDS));
    }

    @Override
    public String toString() {
        return "MyDelayed{" + "name='" + name + '\'' + ", time=" + time + '}';
    }
}

public class Test {
    public static void main(String[] args) throws InterruptedException {
        DelayQueue<MyDelayed> delayQueue = new DelayQueue<>();
        // offer
        delayQueue.offer(new MyDelayed("B", 3000L));
        delayQueue.offer(new MyDelayed("A", 2L));
        // poll
        // wait until the shorter delay has expired
        Thread.sleep(2L);
        System.out.println("before poll");
        // Output: MyDelayed{name='A', time=1587625828209}
        System.out.println(delayQueue.poll());
        // Output is null, because B's delay has not expired
        System.out.println(delayQueue.poll());
    }
}

SynchronousQueue

A blocking queue in which each insert operation must wait for a corresponding remove operation by another thread, and vice versa. A synchronous queue does not have any internal capacity.

The insert and remove operations of a SynchronousQueue may block the calling thread. The insert and remove must meet at the same time, and the faster thread blocks waiting for the other.

public static void main(String[] args) throws InterruptedException {
    SynchronousQueue<Integer> synchronousQueue = new SynchronousQueue<>();
    new Thread(() -> {
        try {
            Thread.sleep(1000L);
            System.out.println("the put to wait for remove...");
            synchronousQueue.put(1); // blocks until another thread takes the element
            System.out.println("end to put.");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }).start();
    new Thread(() -> {
        try {
            System.out.println("the remove to wait for put...");
            System.out.println(synchronousQueue.take()); // blocks until an element is put
            System.out.println("end to remove.");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }).start();
}

Output

the remove to wait for put...
the put to wait for remove...
end to put.
1
end to remove.

ConcurrentLinkedQueue

An unbounded thread-safe queue based on linked nodes. This queue orders elements FIFO (first-in-first-out). A ConcurrentLinkedQueue is an appropriate choice when many threads will share access to a common collection.

Stack

Classes that can be used as a stack:

java.util.Stack
java.util.Deque
java.util.LinkedList
java.util.ArrayList

The java.util.Stack class represents a last-in-first-out (LIFO) sequence. It extends the Vector class, and it’s thread-safe via synchronized methods.

If you don’t operate in a multithreaded environment, you should use a Deque as a stack, which is faster than Stack, as in the sketch below.
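
A sketch of the stack idiom on ArrayDeque (illustrative values):

import java.util.ArrayDeque;
import java.util.Deque;

Deque<String> stack = new ArrayDeque<>();
stack.push("a");                  // addFirst
stack.push("b");
System.out.println(stack.pop());  // "b" -- last in, first out
System.out.println(stack.peek()); // "a" -- examine without removing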

Methods of Stack

  • E peek()
  • E pop()
  • E push(E item)
  • boolean empty()
  • int search(Object o)

Set

Hierarchy of Set:

(I)Set
|----(A)AbstractSet
|--------HashSet
|------------LinkedHashSet
|--------(A)EnumSet
|--------java.util.concurrent.CopyOnWriteArraySet
|----(I)SortedSet
|--------(I)NavigableSet
|------------TreeSet
|------------java.util.concurrent.ConcurrentSkipListSet

Set Interface

A collection that contains no duplicate elements, and at most one null element.

Methods of Set interface

  • boolean add(E e), addAll(Collection c)
  • void clear()
  • boolean contains(Object o), containsAll(Collection c)
  • boolean equals(Object o)
  • int hashCode()
  • boolean isEmpty()
  • Iterator iterator()
  • boolean remove(Object o), removeAll(Collection c)
  • boolean retainAll(Collection c)
  • int size()
  • default Spliterator spliterator()
  • Object[] toArray(), T[] toArray(T[] a)

The Set interface does not have any get or find methods to find an element from the Set container. The only way to find the specified element is by using its iterator to traverse the Set.

AbstractSet Class

This class provides a skeletal implementation of the Set interface to minimize the effort required to implement this interface.

Implemented methods of Set interface:

  • equals(Object o)
  • hashCode()
  • removeAll(Collection c)

HashSet Class

This class implements the Set interface, backed by a hash table (actually a HashMap instance). It makes no guarantees as to the iteration order of the set.

This class offers constant time performance for the basic operation (add, remove, contains and size), assuming the hash function disperses the elements properly among the buckets.

This implementation is not synchronized.

This class’s iterator is fail-fast.

LinkedHashSet

Hash table and linked list implementation of Set interface, with predictable iteration order. This implementation differs from HashSet in that it maintains a doubly-linked list running through all of its entries. This linked list defines the iteration Order, which is insertion-order.

EnumSet Abstract Class

A specialized Set implementation for use with enum types. All of the elements in an enum set must come from a single enum type. Enum sets are represented internally as bit vectors. This representation is extremely compact and efficient.

SortedSet Interface

SortedSet is a Set that provides a total Order on its elements. The elements are ordered using their natural Order, or by a Comparator typically provided at sorted set creation time. The set’s iterator will traverse the set in ascending element order.

NavigableSet is a SortedSet extended with navigation methods reporting closest matches for given search targets. Methods lower, floor, ceiling, and higher return elements respectively less than, less than or equal to, greater than or equal to, and greater than a given element, returning null if there is no such element. A NavigableSet may be accessed and traversed in either ascending or descending order (see the sketch after the method list below).

Methods of NavigableSet interface

  • Navigation
    • E ceiling(E e)
    • E floor(E e)
    • E higher(E e)
    • E lower(E e)
    • E pollFirst()
    • E pollLast()
  • Iterator
    • Iterator descendingIterator()
    • Iterator iterator()
  • Subset
    • SortedSet headSet(E toElement), NavigableSet headSet(E toElement, boolean inclusive)
    • SortedSet tailSet(E fromElement), NavigableSet tailSet(E fromElement, boolean inclusive)
    • SortedSet subSet(E fromElement, E toElement), NavigableSet subSet(E fromElement, boolean fromInclusive, E toElement, boolean toInclusive)
  • Reverse
    • NavigableSet descendingSet()
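
A small sketch of these navigation methods on a TreeSet (the values are illustrative):

import java.util.Arrays;
import java.util.NavigableSet;
import java.util.TreeSet;

NavigableSet<Integer> set = new TreeSet<>(Arrays.asList(10, 20, 30));
System.out.println(set.lower(20));        // 10   (strictly less)
System.out.println(set.floor(20));        // 20   (less than or equal)
System.out.println(set.ceiling(25));      // 30   (greater than or equal)
System.out.println(set.higher(30));       // null (nothing strictly greater)
System.out.println(set.descendingSet());  // [30, 20, 10]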

TreeSet Class

TreeSet is a NavigableSet implementation based on TreeMap. The elements are ordered using their natural ordering, or by a Comparator provided at set creation time, depending on which constructor is used.

This implementation provides guaranteed log(n) time cost for the basic operations (add, remove, and contains).

This implementation is not synchronized. If you want to use a TreeSet in a multithreaded environment, you can use Collections.synchronizedSortedSet(new TreeSet()) to wrap it.

Concurrent Set

CopyOnWriteArraySet

CopyOnWriteArraySet is a Set that uses an internal CopyOnWriteArrayList for all of its operations.

It shares basic properties:

  • It is best suited for applications in which set sizes generally stay small, read-only operations vastly outnumber mutative operations, and you need to prevent interference among threads during traversal.
  • It is thread-safe.
  • Mutative operations (add, set, remove, etc.) are expensive since they usually entail copying the entire underlying array.
  • Iterators do not support the mutative remove operation.
  • Traversal via iterators is fast and cannot encounter interference from other threads. Iterators rely on unchanging snapshots of the array at the time the iterators were constructed.

ConcurrentSkipListSet

ConcurrentSkipListSet is a scalable concurrent NavigableSet implementation based on a ConcurrentSkipListMap. The elements of the set are kept sorted according to their natural ordering, or by a Comparator provided at set creation time, depending on which constructor is used.

This implementation provides expected average log(n) time cost for the contains, add, and remove operations and their variants. Insertion, removal, and access operations safely execute concurrently by multiple threads.

Iterators and spliterators are weakly consistent.

Beware that, unlike in most collections, the size method is not a constant-time operation. Because of the asynchronous nature of these sets, determining the current number of elements requires a traversal of the elements, and so may report inaccurate results if this collection is modified during traversal. Additionally, the bulk operations addAll, removeAll, retainAll, containsAll, equals, and toArray are not guaranteed to be performed atomically. For example, an iterator operating concurrently with an addAll operation might view only some of the added elements.

Map

Hierarchy of Map:

(I)Map
|----Hashtable
|----(A)AbstractMap
|--------HashMap
|------------LinkedHashMap
|--------EnumMap
|--------IdentityHashMap
|--------WeakHashMap
|----(I)SortedMap
|--------(I)NavigableMap
|------------TreeMap
|----(I)java.util.concurrent.ConcurrentMap
|--------ConcurrentHashMap
|--------(I)ConcurrentNavigableMap
|------------ConcurrentSkipListMap

Map interface

An object that maps keys to values. A map cannot contain duplicate keys, and each key can map to at most one value.

The Map interface provides three collection views: a set of keys, a collection of values, and a set of key-value mappings.

Methods of Map interface

  • Basic

    • V get(Object key)
    • V put(K key, V value)
    • V remove(Object key)
    • int size()
    • boolean isEmpty()
    • void clear()
    • boolean containsKey(Object key)
    • boolean containsValue(Object value)
    • void putAll(Map m)
    • default V putIfAbsent(K key, V value)
    • default V getOrDefault(Object key, V defaultValue)
    • default boolean remove(Object key, Object value)
    • default V replace(K key, V value)
    • default boolean replace(K key, V oldValue, V newValue)
  • View

    • Set entrySet()
    • Set keySet()
    • Collection values()
  • Function

    • default V compute(K key, BiFunction remappingFunction) - Attempts to compute a mapping for the specified key and its current mapped value. Also computeIfAbsent(...), computeIfPresent(...)
    • default void forEach(BiConsumer action)
    • default V merge(K key, V value, BiFunction remappingFunction)
    • default void replaceAll(BiFunction function)

Note that a map means mapping keys to values; it does not specify how the mapping is done. Hashing is one way to map keys to values.
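
For example, a sketch of merge and getOrDefault building a word-frequency map:

import java.util.HashMap;
import java.util.Map;

Map<String, Integer> freq = new HashMap<>();
for (String word : new String[]{"a", "b", "a"}) {
    // insert 1 if the key is absent, otherwise combine the old value with 1
    freq.merge(word, 1, Integer::sum);
}
System.out.println(freq);                      // {a=2, b=1}
System.out.println(freq.getOrDefault("c", 0)); // 0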

Hashtable

This class implements a hash table, which maps keys to values.

Hashtable is synchronized. If a thread-safe implementation is not needed, it is recommended to use HashMap in place of Hashtable. If a thread-safe highly-concurrent implementation is desired, then it is recommended to use ConcurrentHashMap in place of Hashtable.

AbstractMap

This class provides a skeletal implementation of the Map interface, to minimize the effort required to implement this interface.

It implements some Map methods, such as get() and containsKey(), simply by iterating over the entry set.

HashMap

Hash table based implementation of the Map interface. It permits null values and the null key. This class makes no guarantees as to the order of the map. The HashMap class is roughly equivalent to Hashtable, except that it is unsynchronized and permits nulls.

This implementation is not synchronized. If you want to use a HashMap in a multithreaded environment, you can use Collections.synchronizedMap() to wrap it.

EnumMap

A specialized Map implementation for use with enum type keys. All of the keys in an enum map must come from a single enum type that is specified, explicitly or implicitly, when the map is created.

Enum maps are maintained in the natural order of their keys (the order in which the enum constants are declared). This is reflected in the iterators returned by the collections views (keySet(), entrySet(), and values()).

Iterators returned by the collection views are weakly consistent: they will never throw ConcurrentModificationException and they may or may not show the effects of any modifications to the map that occur while the iteration is in progress.

Null keys are not permitted. Attempts to insert a null key will throw NullPointerException.

Like most collection implementations, EnumMap is not synchronized. If you want to use an EnumMap in a multithreaded environment, you can use Collections.synchronizedMap() to wrap it.

IdentityHashMap

This class implements the Map interface with a hash table, using reference-equality in place of object-equality when comparing keys (and values). In other words, in an IdentityHashMap, two keys k1 and k2 are considered equal if and only if (k1 == k2). For example, the keys new String("a") and "a" are different keys in an IdentityHashMap, but the same key in a HashMap.

This class is not a general-purpose Map implementation! While this class implements the Map interface, it intentionally violates Map's general contract, which mandates the use of the equals method when comparing objects. This class is designed for use only in the rare cases wherein reference-equality semantics are required.

This implementation is not synchronized. If you want to use an IdentityHashMap in a multithreaded environment, you can use Collections.synchronizedMap() to wrap it.

WeakHashMap

Hash table based implementation of the Map interface, with weak keys. An entry in a WeakHashMap will automatically be removed when its key is no longer in ordinary use. More precisely, the presence of a mapping for a given key will not prevent the key from being discarded by the garbage collector, that is, made finalizable, finalized, and then reclaimed. When a key has been discarded its entry is effectively removed from the map, so this class behaves somewhat differently from other Map implementations.

Each key object in a WeakHashMap is stored indirectly as the referent of a weak reference. Therefore a key will automatically be removed only after the weak references to it, both inside and outside of the map, have been cleared by the garbage collector.

Like most collection classes, this class is not synchronized. A synchronized WeakHashMap may be constructed using the Collections.synchronizedMap method.

Applicability

When you want useless objects to be removed from a map automatically, you can use WeakHashMap. You can manually remove elements from any type of map, but if you don’t, or you forget to, it may cause a memory leak.
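
A sketch of the automatic removal (GC timing is not guaranteed, so the final size is not deterministic):

import java.util.Map;
import java.util.WeakHashMap;

Map<Object, String> cache = new WeakHashMap<>();
Object key = new Object();
cache.put(key, "cached value");
key = null;  // drop the only strong reference to the key
System.gc(); // once the key is collected, its entry is removed from the map
System.out.println(cache.size()); // likely 0, but depends on the collector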

How is WeakHashMap implemented?

Each key is stored as the referent of a java.lang.ref.WeakReference.

SortedMap interface

A Map that further provides a total ordering on its keys. The map is ordered according to the natural ordering of its keys, or by a Comparator typically provided at sorted map creation time. This order is reflected when iterating over the sorted map’s collection views (returned by the entrySet, keySet and values methods). Several additional operations are provided to take advantage of the ordering.

NavigableMap is a SortedMap extended with navigation methods returning the closest matches for given search targets. Methods lowerEntry, floorEntry, ceilingEntry, and higherEntry return Map.Entry objects associated with keys respectively less than, less than or equal to, greater than or equal to, and greater than a given key, returning null if there is no such key. Similarly, methods lowerKey, floorKey, ceilingKey, and higherKey return only the associated keys. All of these methods are designed for locating, not traversing, entries.

TreeMap

A Red-Black tree based NavigableMap implementation. The map is sorted according to the natural ordering of its keys, or by a Comparator provided at map creation time, depending on which constructor is used.

This implementation provides guaranteed log(n) time cost for the containsKey, get, put and remove operations.

This class is not synchronized. A synchronized TreeMap may be constructed using the Collections.synchronizedMap method.

Concurrent Maps

ConcurrentHashMap

What is it

A concurrent map implemented with a hash table, supporting full concurrency of retrievals and high concurrency for updates. It is like Hashtable, but with more concurrency.

A hash table supporting full concurrency of retrievals and high expected concurrency for updates. This class obeys the same functional specification as Hashtable, and includes versions of methods corresponding to each method of Hashtable. However, even though all operations are thread-safe, retrieval operations do not entail locking, and there is not any support for locking the entire table in a way that prevents all access. This class is fully interoperable with Hashtable in programs that rely on its thread safety but not on its synchronization details.

ConcurrentSkipListMap

A scalable concurrent ConcurrentNavigableMap implementation. The map is sorted according to the natural ordering of its keys, or by a Comparator provided at map creation time, depending on which constructor is used.

Utility Classes

Collections

What Is It

This class consists of static methods that operate on or return collections.

Methods of Collections

  • Operations
    • static boolean addAll(Collection c, T... elements)
    • static int binarySearch(List list, T key), binarySearch(List list, T key, Comparator c)
    • static void copy(List dest, List src)
    • static boolean disjoint(Collection c1, Collection c2)
    • static void fill(List list, T obj)
    • static int frequency(Collection c, Object o)
    • static int indexOfSubList(List source, List target), int lastIndexOfSubList(List source, List target)
    • static T max(Collection c), max(Collection coll, Comparator comp)
    • static T min(Collection c), min(Collection coll, Comparator comp)
    • static boolean replaceAll(List list, T oldVal, T newVal)
    • static void reverse(List list)
    • static Comparator reverseOrder(), reverseOrder(Comparator comp)
    • static void rotate(List list, int distance)
    • static void shuffle(List list), shuffle(List list, Random rnd)
    • static void sort(List list), sort(List list, Comparator comp)
    • static void swap(List list, int i, int j)
  • Transforms
    • static Queue asLifoQueue(Deque deque)
    • static Collection checkedCollection(Collection c, Class type), checkedList(List list, Class type), checkedMap, checkedNavigableMap, …
    • static Enumeration enumeration(Collection c)
    • static ArrayList list(Enumeration e)
    • static Set newSetFromMap(Map map)
    • static Collection synchronizedCollection(Collection c), synchronizedList(List list), synchronizedMap(Map map), synchronizedSet(Set set), …
    • static Collection unmodifiableCollection(Collection), unmodifiableList(List list), unmodifiableMap(Map map), …
  • Creations
    • static List emptyList(), emptyMap(), emptySet(), …
    • static List nCopies(int n, T o)
    • static Set singleton(T o), singletonList(T o), singletonMap(K key, V value)

Arrays

What Is It

This class contains various static methods for manipulating arrays (such as sorting and searching). This class also contains a static factory that allows arrays to be viewed as lists.

Methods of Arrays

  • Manipulations
    • static int binarySearch(byte[] a, byte key), binarySearch(byte[] a, int fromIndex, int toIndex, byte key), binarySearch(char[] a, char key), binarySearch(int[] a, int key), …
    • static boolean deepEquals(Object[] a1, Object[] a2)
    • static int deepHashCode(Object[] a)
    • static String deepToString(Object[] a)
    • static boolean equals(int[] a, int[] a2), equals(byte[] a, byte[] a2), …
    • static void fill(int[] a, int val), fill(byte[] a, byte val), …
    • static int hashCode(int[] a), hashCode(byte[] a), …
    • static void parallelPrefix(int[] array, IntBinaryOperator op), …
    • static void setAll(int[] a, IntUnaryOperator generator), …
    • static void parallelSetAll(int[] a, IntUnaryOperator generator), …
    • static void sort(int[] a), sort(int[] a, int fromIndex, int toIndex), …
    • static void parallelSort(int[] a), parallelSort(int[] a, int fromIndex, int toIndex), parallelSort(byte[] a)
    • static Spliterator spliterator(int[] a), …
    • static Stream stream(int[] a), …
    • static String toString(int[] a), …
  • Transforms
    • static int[] copyOf(int[] original, int newLength), copyOf(byte[] original, int newLength), …
    • static int[] copyOfRange(int[] original, int from, int to), copyOfRange(byte[] original, int from, int to), …
  • Creations
    • static List asList(T... a)

Container Features

Traversal

Fail Fast

When the container is structurally modified by add or remove operations while a thread is traversing it with an iterator, the iterator will throw a ConcurrentModificationException, as in the example below.
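
For example (a minimal sketch; the for-each loop uses the list's iterator under the hood):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));
for (String s : list) {
    if ("a".equals(s)) {
        list.remove(s); // structural modification outside the iterator
    }
}
// the iterator's next step throws java.util.ConcurrentModificationException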

Order

Natural Order

Elements are ordered by the compareTo() method of their class.

Consistency

Happens-before

This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement in multiple threads.

For example, suppose a variable int counter = 0 is shared between thread A and thread B. If thread A increments it with counter++, and there is a happens-before relationship between that write and thread B’s System.out.println(counter), then the printed value is guaranteed to be 1.

Concurrent Collections

Besides Queues, this package supplies Collection implementations designed for use in multithreaded contexts: ConcurrentHashMap, ConcurrentSkipListMap, ConcurrentSkipListSet, CopyOnWriteArrayList, and CopyOnWriteArraySet. When many threads are expected to access a given collection, a ConcurrentHashMap is normally preferable to a synchronized HashMap, and a ConcurrentSkipListMap is normally preferable to a synchronized TreeMap. A CopyOnWriteArrayList is preferable to a synchronized ArrayList when the expected number of reads and traversals greatly outnumber the number of updates to a list.

The “Concurrent” prefix used with some classes in this package is a shorthand indicating several differences from similar “synchronized” classes. For example java.util.Hashtable and Collections.synchronizedMap(new HashMap()) are synchronized. But ConcurrentHashMap is “concurrent”. A concurrent collection is thread-safe, but not governed by a single exclusion lock. In the particular case of ConcurrentHashMap, it safely permits any number of concurrent reads as well as a tunable number of concurrent writes. “Synchronized” classes can be useful when you need to prevent all access to a collection via a single lock, at the expense of poorer scalability. In other cases in which multiple threads are expected to access a common collection, “concurrent” versions are normally preferable. And unsynchronized collections are preferable when either collections are unshared, or are accessible only when holding other locks.

Most concurrent Collection implementations (including most Queues) also differ from the usual java.util conventions in that their Iterators and Spliterators provide weakly consistent rather than fail-fast traversal:

  • they may proceed concurrently with other operations
  • they will never throw ConcurrentModificationException
  • they are guaranteed to traverse elements as they existed upon construction exactly once, and may (but are not guaranteed to) reflect any modifications subsequent to construction.

Summary

Container Class Contrast Table

List

Class | Size | Order | Null | Iterator | Time Cost | Space Cost | Thread Safe | Impl
ArrayList | Resizable | Insert order | Allows null | Fail fast | Insert: O(n), remove: O(n), get: O(1) | O(n) | Not thread-safe | Array
Vector | Resizable | Insert order | Allows null | Fail fast | Insert: O(n), remove: O(n), get: O(1) | O(n) | Synchronized | Array
LinkedList | Resizable | Insert order | Allows null | Fail fast | Insert: O(1), remove: O(1), get: O(n) | O(n) | Not thread-safe | Linked nodes
CopyOnWriteArrayList | Resizable | Insert order | Allows null | Snapshot | Insert: O(n), remove: O(n), get: O(1) | O(n); writes copy the array | Concurrent | Array

Queue & Stack

Class | Size | Order | Null | Iterator | Time Cost | Space Cost | Thread Safe | Impl
Stack | Resizable | Insert order | Allows null | Fail fast | Pop: O(1), push: O(1) | O(n) | Synchronized | Extends Vector
ArrayDeque | Resizable | Insert order | Not null | Fail fast | Enqueue/dequeue: O(1) | O(n) | Not thread-safe | Array
PriorityQueue | Resizable | Natural order, or Comparator | Not null | Fail fast | Enqueue/dequeue: O(log(n)) | O(n) | Not thread-safe | Priority heap on array
LinkedBlockingDeque | Optionally bounded | Insert order | Not null | Weakly consistent | Enqueue/dequeue: O(1) | O(n) | Concurrent | Linked nodes
LinkedTransferQueue | Resizable | Insert order | Not null | Weakly consistent | Enqueue/dequeue: O(1) | O(n) | Concurrent | Linked nodes
ArrayBlockingQueue | Bounded | Insert order | Not null | Weakly consistent | Enqueue/dequeue: O(1) | O(n) | Concurrent | Array
LinkedBlockingQueue | Optionally bounded | Insert order | Not null | Weakly consistent | Enqueue/dequeue: O(1) | O(n) | Concurrent | Linked nodes
PriorityBlockingQueue | Resizable | Natural order, or Comparator | Not null | Weakly consistent | Enqueue/dequeue: O(log(n)) | O(n) | Concurrent | Priority heap on array
DelayQueue | Resizable | Delay order | Not null | Weakly consistent | Enqueue/dequeue: O(log(n)) | O(n) | Concurrent | Wrapped PriorityQueue
SynchronousQueue | No capacity | None | Not null | Cannot iterate | Enqueue/dequeue: O(1) | O(1) | Concurrent | Dual stack and dual queue
ConcurrentLinkedQueue | Resizable | Insert order | Not null | Weakly consistent | Enqueue/dequeue: O(1) | O(n) | Concurrent | Linked nodes
ConcurrentLinkedDeque | Resizable | Insert order | Not null | Weakly consistent | Enqueue/dequeue: O(1) | O(n) | Concurrent | Linked nodes

Set

Class | Size | Order | Null | Iterator | Time Cost | Space Cost | Thread Safe | Impl
HashSet | Resizable | No order | Allows single null | Fail fast | Insert: O(1), remove: O(1), contains: O(1) | O(n) | Not thread-safe | Hash table (HashMap)
LinkedHashSet | Resizable | Insert order | Allows single null | Fail fast | Insert: O(1), remove: O(1), contains: O(1) | O(n) | Not thread-safe | Hash table (LinkedHashMap)
CopyOnWriteArraySet | Resizable | Insert order | Allows single null | Snapshot | Insert: O(n), remove: O(n), contains: O(n) | O(n); writes copy the array | Concurrent | CopyOnWriteArrayList
TreeSet | Resizable | Natural order, or Comparator | Not null | Fail fast | Insert: O(log(n)), remove: O(log(n)), contains: O(log(n)) | O(n) | Not thread-safe | TreeMap
ConcurrentSkipListSet | Resizable | Natural order, or Comparator | Not null | Weakly consistent | Insert: O(log(n)), remove: O(log(n)), contains: O(log(n)) | O(n) | Concurrent | ConcurrentSkipListMap

Map

Class | Size | Order | Null | Iterator | Time Cost | Space Cost | Thread Safe | Impl
Hashtable | Resizable | No order | Not null | Fail fast | Insert: O(1), remove: O(1), get: O(1) | O(n) | Synchronized | Hash table (linked array)
HashMap | Resizable | No order | Allows single null key | Fail fast | Insert: O(1), remove: O(1), get: O(1) | O(n) | Not thread-safe | Hash table (linked array)
LinkedHashMap | Resizable | Insert order | Allows single null key | Fail fast | Insert: O(1), remove: O(1), get: O(1) | O(n) | Not thread-safe | Hash table and linked list
EnumMap | Bounded | Natural order of keys | Not null | Not fail fast | Insert: O(1), remove: O(1), get: O(1) | O(n) | Not thread-safe | Object array indexed by enum constants
IdentityHashMap | Resizable | No order | Allows single null key | Fail fast | Insert: O(1), remove: O(1), get: O(1) | O(n) | Not thread-safe | Hash table
WeakHashMap | Resizable | No order | Allows single null key | Fail fast | Insert: O(1), remove: O(1), get: O(1) | O(n) | Not thread-safe | Hash table
TreeMap | Resizable | Natural order, or Comparator | Not null | Fail fast | Insert: O(log(n)), remove: O(log(n)), get: O(log(n)) | O(n) | Not thread-safe | Red-black tree
ConcurrentHashMap | Resizable | No order | Not null | Weakly consistent | Insert: O(1), remove: O(1), get: O(1) | O(n) | Concurrent | Hash table
ConcurrentSkipListMap | Resizable | Natural order, or Comparator | Not null | Weakly consistent | Insert: O(log(n)), remove: O(log(n)), get: O(log(n)) | O(n) | Concurrent | Skip list (two-dimensionally linked)

Core features of container classes

  • bounded, or unbounded
  • ordered, or disordered (insertion order, natural order, comparator order)
  • null value
  • traverse operation is fail-fast or weakly consistent
  • performance
    • read, write and traversal operations time and space cost.
  • thread safe: concurrency, consistency
    • synchronized
    • concurrent
    • consistent

How to select an effective container class

List

  • ArrayList and LinkedList are the common lists. ArrayList is implemented with an array: random access is fast, but insert and remove are slow. LinkedList is implemented with linked nodes: insert and remove are fast, but random access is slow.
  • Vector is a thread-safe list using synchronized methods. You can choose it if you need a list in a multithreaded environment.
  • CopyOnWriteArrayList is also a thread-safe list, and it is more concurrent than a synchronized one, but its write operations are very costly. Choose it when you use a list in a highly concurrent multithreaded environment and operations are almost all reads (90%+).

Stack

  • If you use a stack in a single-threaded environment, use ArrayDeque or LinkedList, or build one by wrapping an ArrayList or a plain array. Selection priority: ArrayDeque > LinkedList.
  • If you want a thread-safe stack, use Stack or ConcurrentLinkedDeque. Stack is strictly consistent but has poor concurrency; ConcurrentLinkedDeque has high concurrency but weakly consistent iteration.

Queue

  • In single-threaded environments, ArrayDeque is the most common queue. If you want a queue with priority, use PriorityQueue.
  • In multithreaded environments, there are two types of thread-safe queues: blocking queues and concurrent queues. Blocking queues are commonly used in producer-consumer scenarios. For general use you can select LinkedBlockingDeque, ArrayBlockingQueue, or LinkedBlockingQueue. There are also special-purpose blocking queues, LinkedTransferQueue, PriorityBlockingQueue, DelayQueue, and SynchronousQueue, to be chosen according to their features.
  • The other thread-safe queue type is the concurrent queues, which have high concurrency. ConcurrentLinkedQueue and ConcurrentLinkedDeque are very similar, except that ConcurrentLinkedDeque can be used as both a queue and a stack.

Set

  • In single-threaded environments, HashSet is the most common set. Other single-threaded sets are LinkedHashSet and TreeSet. LinkedHashSet is similar to HashSet, but additionally keeps elements in insertion order by linking them up. TreeSet has lower performance than HashSet, but it keeps elements ordered by natural order or a comparator.
  • In multithreaded environments, we can use the thread-safe sets CopyOnWriteArraySet and ConcurrentSkipListSet. Use CopyOnWriteArraySet only when most operations (90%+) are reads. Use ConcurrentSkipListSet when you want to keep elements ordered.

Map

  • In single-threaded environments, HashMap is the most common map and suits general use. Other single-threaded maps are LinkedHashMap, EnumMap, IdentityHashMap, WeakHashMap, and TreeMap. LinkedHashMap is similar to HashMap, but keeps entries in insertion order. Use EnumMap when all keys come from a single enum type. IdentityHashMap and WeakHashMap are rarely used, only in the special scenarios their features address. TreeMap has lower performance than HashMap, but keeps its entries ordered by natural order or a comparator.
  • In multithreaded environments, you can choose Hashtable or the concurrent maps. Hashtable is a thread-safe map using synchronized methods; it is strictly consistent but has low concurrency. If you want a thread-safe, highly concurrent map, choose ConcurrentHashMap or ConcurrentSkipListMap. ConcurrentHashMap is similar to HashMap but more concurrent. ConcurrentSkipListMap has high concurrency and keeps its entries ordered.

String

  • String is the most common class for representing a string. It’s final and immutable. If you want to concatenate multiple strings into one, you had better use StringBuilder; it doesn’t generate intermediate strings, saving memory.
  • StringBuffer is similar to StringBuilder; the only difference is that StringBuffer is thread-safe via synchronized methods.
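
A small sketch of the difference (the loop and strings are illustrative):

// Each '+' on Strings would allocate an intermediate String; StringBuilder
// appends into one internal buffer instead.
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 3; i++) {
    sb.append("item").append(i).append(',');
}
String result = sb.toString(); // "item0,item1,item2,"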


–END–

This post discusses Java lambdas and streams. It’s based on Java SE 8.

Stream

What is Stream

A sequence of elements supporting sequential and parallel aggregate operations.

Specialized stream classes (like IntStream) provide additional methods for their element type (like sum() on IntStream).

To perform a computation, stream operations are composed into a stream pipeline. A stream pipeline consists of a source, zero or more intermediate operations, and a terminal operation such as collect(Collector) or forEach(Consumer). Streams are lazy: computation on the source data is only performed when the terminal operation is initiated, and source elements are consumed only as needed.

Streams do not provide a means to directly access or manipulate their elements, and are instead concerned with declaratively describing their source and the computational operations which will be performed in aggregate on that source. However, if the provided stream operations do not offer the desired functionality, the BaseStream.iterator() and BaseStream.spliterator() operations can be used to perform a controlled traversal.

A stream should be operated on (invoking an intermediate or terminal stream operation) only once. Since some stream operations may return their receiver rather than a new stream object, it may not be possible to detect reuse in all cases.
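A small illustration of this single-use rule:

Stream<Integer> stream = Stream.of(1, 2, 3);
List<Integer> doubled = stream.map(i -> i * 2).collect(Collectors.toList()); // OK
// stream.count(); // IllegalStateException: stream has already been operated upon or closed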

Methods of Stream interface

Intermediate Operations

  • Filter
    • Stream distinct()
    • Stream filter(Predicate<? super T> predicate)
    • Stream limit(long maxSize)
    • Stream skip(long n)
  • Map (or Convert)
    • Stream flatMap(Function mapper)
    • DoubleStream flatMapToDouble(Function mapper)
    • IntStream flatMapToInt(Function mapper)
    • LongStream flatMapToLong(Function mapper)
    • Stream map(Function mapper)
    • DoubleStream mapToDouble(ToDoubleFunction mapper)
    • IntStream mapToInt(ToIntFunction mapper)
    • LongStream mapToLong(ToLongFunction mapper)
  • Operate stream elements
    • Stream peek(Consumer action)
    • Stream sorted()
    • Stream sorted(Comparator comparator)

Terminal Operations

  • Testing

    • boolean allMatch(Predicate predicate)
    • boolean anyMatch(Predicate predicate)
    • boolean noneMatch(Predicate predicate)
    • Optional findAny()
    • Optional findFirst()
  • Get Converted Object Value

    • R collect(Collector<? super T,A,R> collector)
    • Object[] toArray()
    • A[] toArray(IntFunction generator)
  • Get Computing Result

    • long count()
    • Optional max(Comparator comparator)
    • Optional min(Comparator comparator)
    • Optional reduce(BinaryOperator accumulator)
    • T reduce(T identity, BinaryOperator accumulator)
  • Traversing Object

    • void forEach(Consumer action)
    • void forEachOrdered(Consumer action)

Static Methods

  • static Stream empty()
  • static <T> Stream.Builder<T> builder()
  • static Stream concat(Stream<? extends T> a, Stream<? extends T> b)
  • static Stream generate(Supplier s)
  • static Stream iterate(T seed, UnaryOperator f)
  • static Stream of(T... values)
  • static Stream of(T t)

Methods of BaseStream interface

  • void close()
  • boolean isParallel()
  • Iterator<T> iterator()
  • S onClose(Runnable closeHandler)
  • S parallel()
  • S sequential()
  • Spliterator<T> spliterator()
  • S unordered()

Methods of IntStream interface

Intermediate Operations

  • Stream<Integer> boxed()

Terminal Operations

  • OptionalDouble average()
  • int sum()

Examples

Map (or Convert)

List<String> letters = Arrays.asList("a", "b", "c");
List<String> upperCased = letters.stream()
        .map(String::toUpperCase)
        .collect(Collectors.toList());
System.out.println(upperCased); // [A, B, C]

// Assuming a Widget class with getColor() and getWeight() methods:
int sum = widgets.stream()
        .filter(w -> w.getColor() == RED)
        .mapToInt(w -> w.getWeight())
        .sum();

Testing

Stream<Integer> stream = Stream.of(1, 1, 3);
boolean allMatch = stream.allMatch(i -> i > 1); // false

Get Converted Object Value

Stream<Integer> stream = Stream.of(1, 1, 3);
List<Integer> result = stream.filter(i -> i > 1).collect(Collectors.toList()); // [3]

Get Computing Result

long count = Stream.of(1, 1, 3).count(); // 3

// A stream can be consumed only once, so create a new stream for each reduction.
int sum = Stream.of(1, 2, 3, 2, 1).reduce(0, (i, j) -> i + j); // 9
int product = Stream.of(1, 2, 3, 2, 1).reduce(1, (i, j) -> i * j); // 12

Traversing Object

Stream.of(1, 2, 3).forEach(i -> System.out.println(i));
Stream.of(1, 2, 3, 2, 1)
        .sorted((o1, o2) -> o1 - o2)
        .forEachOrdered(System.out::print); // 12233

Collector

A mutable reduction operation that accumulates input elements into a mutable result container, optionally transforming the accumulated result into a final representation after all input elements have been processed. Reduction operations can be performed either sequentially or in parallel.

Collectors

Implementations of Collector that implement various useful reduction operations, such as accumulating elements into collections, summarizing elements according to various criteria.
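For example, two common reductions from the Collectors class:

List<String> words = Arrays.asList("apple", "banana", "avocado");
// Group the words by their first letter: {a=[apple, avocado], b=[banana]}
Map<Character, List<String>> byFirstLetter = words.stream()
        .collect(Collectors.groupingBy(w -> w.charAt(0)));
// Join the words into one string: "apple, banana, avocado"
String joined = words.stream().collect(Collectors.joining(", "));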

Function

Function

Function is a functional Interface

  • R apply(T t)

Example

Function<Integer, Double> half = a -> a/2.0;
System.out.println(half.apply(10));

Predicate

Predicate is a functional Interface

  • boolean test(T t)

Example

Predicate<Integer> predicate = i -> i > 1;
System.out.println(predicate.test(2)); // true

public boolean testPredicate(int val, Predicate<Integer> predicate) {
    return predicate.test(val);
}

public void test() {
    assertTrue(testPredicate(2, i -> i > 1));
    assertFalse(testPredicate(1, i -> i > 1));
}

Consumer

Consumer is a functional Interface

  • void accept(T t)

Example

Consumer<String> consumer = System.out::println;
consumer.accept("hello Consumer");

Supplier

Supplier is a functional Interface

  • T get()

Example

Supplier<String> supplier = () -> "hello Supplier";
System.out.println(supplier.get());

public static void testSupplier(Supplier<String> supplier) {
    System.out.println(supplier.get());
}

public static void main(String[] args) {
    testSupplier(() -> "hello");
}

BinaryOperator

BinaryOperator<T> is a functional interface; it extends BiFunction<T, T, T>

  • T apply(T t1, T t2)
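
Example

A minimal sketch; BinaryOperator is also the accumulator type taken by Stream.reduce:

BinaryOperator<Integer> sum = (a, b) -> a + b;
System.out.println(sum.apply(2, 3)); // 5
Integer total = Stream.of(1, 2, 3).reduce(0, sum); // 6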

Lambda Expressions and Method References

What is a Lambda Expression?

A lambda expression is a concise way to implement an anonymous inner class of an interface that has a single abstract method (a functional interface, often marked with the @FunctionalInterface annotation).

Example 1

// anonymous inner class
Thread thread1 = new Thread(new Runnable() {
    @Override
    public void run() {
        System.out.println("anonymous inner class");
    }
});

// lambda expression
Thread thread2 = new Thread(() -> {
    System.out.println("lambda expression");
});

Example 2

File dir = new File("{dir_path}");

// anonymous inner class
File[] files = dir.listFiles(new FileFilter() {
    @Override
    public boolean accept(File file) {
        return file.getName().endsWith(".log");
    }
});

// lambda expression
File[] logFiles = dir.listFiles(file -> file.getName().endsWith(".log"));

What is Method Reference?

You use lambda expressions to create anonymous methods. Sometimes, however, a lambda expression does nothing but call an existing method. In those cases, it’s often clearer to refer to the existing method by name. Method references enable you to do this; they are compact, easy-to-read lambda expressions for methods that already have a name.

Kinds of Method References

There are four kinds of method references:

  • Reference to a static method: ContainingClass::staticMethodName
  • Reference to an instance method of a particular object: containingObject::instanceMethodName
  • Reference to an instance method of an arbitrary object of a particular type: ContainingType::methodName
  • Reference to a constructor: ClassName::new

Examples of Usage

public class People {

    private String name;

    public People(String name) {
        this.name = name;
    }

    public String getName() {
        return this.name;
    }

    public boolean compareName(String name) {
        return this.name.equals(name);
    }

    public static String convertToUpperCase(String s) {
        return s.toUpperCase();
    }

    public static void main(String[] args) {
        // Reference to an instance method of an arbitrary object of a particular type
        Function<People, String> getName = People::getName;
        System.out.println(getName.apply(new People("Tom"))); // Tom

        // Reference to an instance method of a particular object
        People people = new People("Jack");
        Function<String, Boolean> compareName = people::compareName;
        System.out.println(compareName.apply("Jack")); // true

        // Reference to a static method
        Function<String, String> convertToUpperCase = People::convertToUpperCase;
        System.out.println(convertToUpperCase.apply("abc")); // ABC

        // Reference to a constructor
        Function<String, People> getInstance = People::new;
        System.out.println(getInstance.apply("John").getName()); // John
    }
}

Java Util

Optional

What is Optional?

Optional wraps an object: if the value is not null, isPresent() returns true and get() returns the value; if the value is null, isPresent() returns false and get() throws NoSuchElementException.

In short, from an Optional you either get a value or an empty optional; you never get null.

Optional Methods

Static Methods

  • static Optional empty() // get empty Optional
  • static Optional of(T value) // if value is null, it will throw NullPointerException
  • static Optional ofNullable(T value) // if value is null, return empty Optional

Intermediate Operations

  • Optional filter(Predicate predicate)
  • Optional flatMap(Function mapper)
  • Optional map(Function mapper)

Terminal Operations

  • Test
    • void ifPresent(Consumer consumer)
    • boolean isPresent()
  • Get
    • T get()
    • T orElse(T other)
    • T orElseGet(Supplier other)
    • T orElseThrow(Supplier exceptionSupplier)

Example

int getNotNullValueOrDefault(Integer obj, int defaultValue) {
    Optional<Integer> optional = Optional.ofNullable(obj);
    return optional.orElse(defaultValue);
}

Comparator

Comparator is a functional interface

  • int compare(T o1, T o2)
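
Example

A small sketch: sorting strings by length first, then alphabetically for equal lengths:

List<String> names = new ArrayList<>(Arrays.asList("Tom", "Jack", "Al"));
names.sort(Comparator.comparingInt(String::length)
        .thenComparing(Comparator.naturalOrder()));
System.out.println(names); // [Al, Tom, Jack]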

Spliterator

Spliterator is for traversing and partitioning elements of a source. The source of elements covered by a Spliterator could be, for example, an array, a Collection, an IO channel, or a generator function.

When used in a parallel programming solution, bear in mind that an individual Spliterator is not thread safe; the parallel processing implementation must ensure that each individual Spliterator is handled by only one thread at a time. A thread calling trySplit() on a Spliterator may hand the returned new Spliterator over to a separate thread for processing. This follows the idea of decomposing a larger task into smaller sub-tasks that can be processed in parallel independently of one another.

Note that a Spliterator, like a Stream, can be operated on only once; you cannot traverse the same spliterator more than once.

Methods of Spliterator

  • Characteristics
    • int characteristics()
    • default boolean hasCharacteristics(int characteristics)
  • Traversal
    • default void forEachRemaining(Consumer action)
    • boolean tryAdvance(Consumer action)
  • Others
    • long estimateSize()
    • default long getExactSizeIfKnown()
    • default Comparator getComparator()
    • Spliterator trySplit()

Example

characteristics

List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3));
Spliterator<Integer> spliterator = list.spliterator();
System.out.println(spliterator.hasCharacteristics(Spliterator.SIZED)); // true
int expected = Spliterator.ORDERED | Spliterator.SIZED | Spliterator.SUBSIZED;
System.out.println(spliterator.characteristics() == expected); // true

traversal

spliterator.forEachRemaining(System.out::println);
// or, equivalently, element by element:
while (spliterator.tryAdvance(System.out::println));

split

List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3));
Spliterator<Integer> spliterator = list.spliterator();
Spliterator<Integer> newPartition = spliterator.trySplit();
spliterator.forEachRemaining(System.out::println);
System.out.println("======");
newPartition.forEachRemaining(System.out::println);

Summary

Stream provides operations on collections. It is similar to several chained for loops, but more concise, and it supplies many convenient ready-made operations. In addition, it can process container elements sequentially or in parallel.

A functional interface marks an interface that can be implemented with a lambda expression.

A lambda expression is similar to the implementation of an anonymous inner class, but more concise.

A method reference is a replacement for a lambda expression when the lambda does nothing but call an existing method with the same input and output; it is even more concise than a lambda expression.


This post discusses Java IO streams. It is based on Java SE 8.

Input Stream

Input Stream Methods

  • int available(): returns an estimate of the number of bytes that can still be read.
  • void close(): closes the input stream.
  • void mark(int readlimit): marks the current position in this input stream.
    • readlimit: the number of bytes that can be read from the stream after setting the mark before the mark becomes invalid. Note that for some subclasses of InputStream this parameter has no meaning.
  • boolean markSupported(): returns whether this input stream supports the mark and reset methods.
  • int read(): reads the next byte of data.
  • int read(byte[] b): reads some bytes from the input stream and stores them into the buffer array b.
  • int read(byte[] b, int off, int len): reads up to len bytes of data from the input stream into an array of bytes.
  • void reset(): repositions this stream to the position at the mark (default 0).
  • long skip(long n): skips bytes of data.

What does Input Stream actually do?

InputStream only reads data from front to back. It can skip some data, and it can go back to a marked position to read again.

An InputStream can read bytes one by one, or read several bytes into a byte array at once.

When the end of the stream is reached, the read methods return -1.

int read(byte[] b) delegates to int read(byte[] b, int offset, int len), which in turn calls the abstract int read().

The read() methods can be implemented directly in Java (e.g. ByteArrayInputStream) or by calling a native method (e.g. FileInputStream).
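For example, the typical read loop reads chunks until read returns -1 (a sketch; the file name is illustrative):

byte[] buffer = new byte[1024];
try (InputStream in = new BufferedInputStream(new FileInputStream("data.bin"))) {
    int bytesRead;
    while ((bytesRead = in.read(buffer)) != -1) {
        // process buffer[0..bytesRead)
    }
} catch (IOException e) {
    e.printStackTrace();
}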

InputStream Types

  • ByteArrayInputStream: Byte array as data
    • ByteArrayInputStream(byte[] b)
  • FileInputStream: File as data
    • FileInputStream(File file)
  • ObjectInputStream: Deserializes primitive data and objects previously written using an ObjectOutputStream.
    • ObjectInputStream(InputStream in)
    • readObject()
  • PipedInputStream: A piped input stream should be connected to a piped output stream; the piped input stream then provides whatever data bytes are written to the piped output stream.
    • PipedInputStream()
    • connect(PipedOutputStream src)
  • StringBufferInputStream (deprecated): String as data.
    • StringBufferInputStream(String s)
  • SequenceInputStream: Logical concatenation of other input streams.
    • SequenceInputStream(InputStream s1, InputStream s2)
  • FilterInputStream: contains some other input stream as its data source, transforming the data or providing additional functionality.
    • BufferedInputStream: reads data from an internal buffer array rather than from the underlying input stream on every call.
      • BufferedInputStream(InputStream in)
    • PushbackInputStream: unread one byte.
      • PushbackInputStream(InputStream in)
      • unread(int byteValue)
    • DataInputStream: read primitive Java data types.
      • DataInputStream(InputStream in)
      • readInt()
    • LineNumberInputStream(Deprecated): provides the added functionality of keeping track of the current line number.
      • LineNumberInputStream(InputStream in)
      • getLineNumber()

OutputStream

OutputStream Methods

  • void close(): close output stream and release resources.
  • void flush(): Flushes output stream and forces buffered bytes to be written out.
  • void write(byte[] b): Writes byte array.
  • void write(byte[] b, int offset, int len): Writes len bytes from the byte array starting at offset.
  • void write(int b): Writes the specified byte.

What does Output Stream actually do?

Writes one byte, or several bytes, of data to the output stream.

If you do not call flush or close on an output stream, some data written to its internal buffer may never be written to the stream.

Some output streams have an internal buffer array; others do not.

OutputStream Types

  • ByteArrayOutputStream: Writes data to a byte array. Retrieve the data using toByteArray() and toString().
    • toByteArray()
    • toString()
  • FileOutputStream: Writes data to a File or FileDescriptor.
    • FileOutputStream(File file)
  • ObjectOutputStream: Writes primitive data types and Java objects to an OutputStream.
    • ObjectOutputStream(OutputStream out)
    • writeObject(Object obj)
    • writeInt(int val)
  • PipedOutputStream: Sends data to a pipe.
  • FilterOutputStream: Sits on top of an existing output stream, transforming the data or providing additional functionality.
    • BufferedOutputStream: Writes data to an internal buffer array to avoid writing to the underlying stream on every call.
      • BufferedOutputStream(OutputStream out)
    • DataOutputStream: Write primitive Java data types to output stream.
      • writeChars(String s)
      • writeInt(int val)
    • PrintStream: Print various values to output stream.
      • println(String s)
      • write(int b)

Notes

If a class does not implement the Serializable interface, calling writeObject() with one of its instances throws java.io.NotSerializableException.
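A minimal round-trip sketch, assuming a User class that implements Serializable and has a getName() method; the file name is arbitrary, and the enclosing method must handle IOException and ClassNotFoundException:

try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("user.ser"))) {
    out.writeObject(new User("Tom")); // throws NotSerializableException if User is not Serializable
}
try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("user.ser"))) {
    User user = (User) in.readObject();
    System.out.println(user.getName()); // Tom
}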

Reader

Reader Methods

  • abstract void close()
  • void mark(int readAheadLimit)
  • boolean markSupported()
  • int read()
  • int read(char[] cbuf)
  • abstract int read(char[] cbuf, int off, int len)
  • int read(CharBuffer target)
    • java.nio.CharBuffer target is using for read characters.
  • boolean ready(): returns whether this stream is ready to be read; it plays a role similar to InputStream.available().
  • void reset()
  • long skip(long n)

What does Reader actually do?

Reads one character at a time, or reads several characters into a char array, from a source such as a char array, file, or pipe.

Reader Types

The descriptions are analogous to the corresponding InputStream types.

  • BufferedReader
    • BufferedReader(Reader in)
    • readLine()
  • LineNumberReader
    • LineNumberReader(Reader in)
    • readLine()
    • getLineNumber()
  • CharArrayReader
    • CharArrayReader(char c[])
  • Abstract FilterReader
  • PushbackReader
    • PushbackReader(Reader in)
    • unread(int val)
  • InputStreamReader
    • InputStreamReader(InputStream in)
  • FileReader
    • FileReader(File file)
  • PipedReader
    • connect(PipedWriter src)
  • StringReader
    • StringReader(String s)

Writer

Writer Methods

  • Writer append(char c)
  • Writer append(CharSequence csq)
    • Same as write(String str), but the calls can be chained, and the CharSequence argument may be null.
  • Writer append(CharSequence csq, int start, int end)
  • abstract void close()
  • abstract void flush()
  • void write(char[] cbuf)
  • abstract void write(char[] cbuf, int offset, int len)
  • void write(int c)
  • void write(String str)
  • void write(String str, int offset, int len)

What does Writer actually do?

Writes one character, or several characters, to a destination such as a char array, file, or pipe.

Writer Types

  • BufferedWriter
    • BufferedWriter(Writer out)
    • void newLine()
  • OutputStreamWriter
    • OutputStreamWriter(OutputStream out, Charset cs)
  • FileWriter
    • FileWriter(File file)
  • StringWriter
    • StringBuffer getBuffer()
  • CharArrayWriter
    • String toString()
    • char[] toCharArray()
  • abstract FilterWriter
  • PipedWriter
  • PrintWriter
    • PrintWriter(OutputStream out)
    • PrintWriter(Writer out)
    • void println(String s)

Standard IO Streams

java.lang.System

  • PrintStream out
  • InputStream in
  • PrintStream err

Interfaces

java.lang.AutoCloseable: Auto close resources.

  • void close(): This method is invoked automatically on objects managed by the try-with-resources statement.

Closeable: resources can be closed.

  • void close()

DataInput: provide reading data in any of the Java primitive types.

  • boolean readBoolean()
  • int readInt()

DataOutput: provide writing data in any of the Java primitive types.

  • void writeBoolean(int b)
  • void writeInt(int val)

Externalizable: extends the Serializable interface. It is used for custom object serialization.

  • void readExternal(ObjectInput in)
  • void writeExternal(ObjectOutput out)

FileFilter: It is a functional interface and can be used for a lambda expression or method reference. Filter files.

  • boolean accept(File pathname)

FilenameFilter: It is a functional interface and can be used for a lambda expression or method reference. Filters file names.

  • boolean accept(File dir, String name)

Flushable: data can be flushed.

  • void flush(): Flushes buffered output to the underlying stream.

ObjectInput: extends DataInput interface. Allows reading object.

  • Object readObject()

ObjectInputValidation: Callback interface to allow validation of objects within a graph. It has no implemented subclasses.

ObjectOutput: extends DataOutput interface. Allows writing object.

  • void writeObject(Object obj)

ObjectStreamConstants: Constants written into the Object Serialization Stream.

Serializable: Allows class serializability. It has no methods.

File

File

  • Fields
    • static String pathSeparator: system dependent path separator, e.g ;.
    • static String separator: system dependent name separator, e.g /.
  • Constructors
    • File(File parent, String child), File(String pathname), File(String parent, String child), File(URI uri)
  • File Permission
    • boolean canExecute(), boolean canRead(), boolean canWrite()
    • boolean setExecutable(boolean executable), boolean setWritable(boolean writable)
  • File Handling
    • boolean createNewFile(), static File createTempFile(String prefix, String suffix, File directory), boolean mkdir(), boolean mkdirs(), boolean renameTo(File dest)
    • boolean delete(), void deleteOnExit()
    • int compareTo(File pathname): compares two abstract pathnames lexicographically.
    • String[] list(), String[] list(FilenameFilter filter), File[] listRoots()
    • Path toPath(), URI toURI(), URL toURL()
  • File Information
    • boolean exists()
    • isAbsolute(), isDirectory(), isFile(), isHidden()
    • String getAbsolutePath(), String getCanonicalPath(), String getName(), File getParentFile(), String getPath()
    • long getFreeSpace(), long getTotalSpace(), long getUsableSpace(), long lastModified(), long length()

FileDescriptor

Whenever a file is opened, the operating system creates an entry to represent this file and stores its information. Each entry is represented by an integer value and this entry is termed as file descriptor. [2]

Basically, Java class FileDescriptor provides a handle to the underlying machine-specific structure representing an open file, an open socket, or another source or sink of bytes. [2]

The applications should not create FileDescriptor objects, they are mainly used in creation of FileInputStream or FileOutputStream objects to contain it. [2]

File Paths

Get the file, absolute file path, and InputStream for a resource on the classpath, e.g. src/resources/test.txt

You can get an InputStream by using the ClassLoader.getResourceAsStream() method.

InputStream in = getClass().getClassLoader().getResourceAsStream("test.txt");

You can get the file object and file path by using the ClassLoader to obtain a URL, and then reading them from it.

URL url = getClass().getClassLoader().getResource("test.txt");
File file = new File(url.getFile());
String filepath = file.getAbsolutePath();

Difference between the getResource() methods:

getClass().getClassLoader().getResource("test.txt") // relative path
getClass().getResource("/test.txt") // note the slash at the beginning

Get the file, absolute file path, and InputStream for a resource inside a JAR, e.g. src/resources/test.txt

This is deliberate. The contents of the “file” may not be available as a file. Remember you are dealing with classes and resources that may be part of a JAR file or other kind of resource. The classloader does not have to provide a file handle to the resource, for example the jar file may not have been expanded into individual files in the file system. [3]

Anything you can do by getting a java.io.File could be done by copying the stream out into a temporary file and doing the same, if a java.io.File is absolutely necessary. [3]

Result: You can get the input stream and the File via the ClassLoader, but you cannot get a usable absolute file path. If you want a usable absolute path inside a JAR, copy the resource stream to a new temporary file, then get the absolute path of the temporary file.

File file = null;
String resource = "/test.txt";
URL res = getClass().getResource(resource);
if (res.getProtocol().equals("jar")) {
    try {
        InputStream input = getClass().getResourceAsStream(resource);
        file = File.createTempFile("tempfile", ".tmp");
        OutputStream out = new FileOutputStream(file);
        int read;
        byte[] bytes = new byte[1024];

        while ((read = input.read(bytes)) != -1) {
            out.write(bytes, 0, read);
        }
        out.close();
        input.close();
        file.deleteOnExit();
    } catch (IOException ex) {
        ex.printStackTrace();
    }
} else {
    // this will probably work in your IDE, but not from a JAR
    file = new File(res.getFile());
}

System.out.println(file.getAbsolutePath());

Summary

IO Streams Functionality

An InputStream or Reader reads data from a source, e.g. a byte or char array, a file, or a pipe. It can read a single unit of data at a time, or read multiple units into an array at once. The read() methods may be implemented in plain Java code or by calling a native method. Filter streams use the Decorator pattern to add functionality (e.g. buffering or unread) on top of other stream objects.

An OutputStream or Writer writes data to a destination, e.g. a byte or char array, a file, or a pipe.

IO Stream Types

  • Byte or Char Array
  • File
  • Object
  • Piped
  • Sequence: Logical concatenation of multiple input streams.
  • Filter
    • Buffered: Operating with buffer array.
    • DataInput, DataOutput: Operating with primitive Java types
    • Pushback: Add unread functionality.
    • Print: Convenient printing of various values.
    • LineNumber: Keeping track of line numbers.

Others

The class java.nio.CharBuffer is used in the method int read(CharBuffer target).

The interface java.lang.CharSequence is used in the method Writer append(CharSequence csq).

Questions

InputStream vs Reader: what is the real difference between them, and which should you use?

A stream operates on bytes, while a reader or writer operates on characters. Use OutputStream for streams of raw bytes, such as image data, and Writer for streams of characters.

CharSequence vs String

A CharSequence is a readable sequence of char values. You can call its chars() method to get an IntStream of its characters.
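For example:

"abc".chars().forEach(c -> System.out.print((char) c)); // prints: abc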

References

[1] Java SE 8 API Documentation

[2] Java File Descriptor Example

[3] How to get a path to a resource in a Java JAR file - Stack Overflow

Content

  • Introduction to Servlet
  • The Servlet Interface
  • The Request
  • Servlet Context
  • The Response
  • Filtering
  • Sessions
  • Dispatching Requests
  • Web Applications
  • Application Lifecycle Events
  • Mapping Requests to Servlets
  • Security

Introduction to Servlet

What is a Servlet

A servlet is a Java-based web component, managed by a servlet container, that generates dynamic content. Servlets are platform-independent Java classes that can run in any web server that supports a servlet container. Servlets interact with web clients via a request/response paradigm implemented by the servlet container.

What is a Servlet Container

The servlet container (sometimes called the servlet engine) is part of a web server or application server. It provides the network services over which requests and responses are sent, decodes MIME-based requests, and formats MIME-based responses. The servlet container also manages the entire lifecycle of servlets.

A servlet container can be built into a host web server, or installed as an add-on component through the web server's extension API. Servlet containers can also be built into or installed in application servers.

All servlet containers must support HTTP as a protocol for requests and responses; other request/response-based protocols, such as HTTPS, may also be supported.

Why do we need Servlet

To implement a Java web application based on the HTTP protocol, we need servlet technology to generate dynamic response content.

Why do we need Servlet Container

A servlet is just a Java class that receives a request object and responds through a response object. However, an application contains many servlets, and we need a container to manage the creation and destruction of these servlets, and to parse and generate the network protocol messages.

How they work

  • A client makes an HTTP request to the web server.
  • The web server receives the request and hands it to the servlet container. The servlet container can run in the same process as the host web server, in a different process on the same host, or on a different host from the web server for which it processes requests.
  • The servlet container decides, based on its configuration, which servlet to invoke, and passes it objects representing the request and the response.
  • The servlet uses the request object to process its logic and generate response data.
  • Once the servlet has finished processing the request, the servlet container ensures that the response is properly flushed and returns control to the host web server.

Servlet History

Servlet versions history

Servlet API version | Released | Specification | Platform | Important changes
Servlet 4.0 | Sep 2017 | JSR 369 | Java EE 8 | HTTP/2
Servlet 3.1 | May 2013 | JSR 340 | Java EE 7 | Non-blocking I/O, HTTP protocol upgrade mechanism (WebSocket)
Servlet 3.0 | December 2009 | JSR 315 | Java EE 6, Java SE 6 | Pluggability, ease of development, async servlet, security, file uploading
Servlet 2.5 | September 2005 | JSR 154 | Java EE 5, Java SE 5 | Requires Java SE 5, supports annotations
Servlet 2.4 | November 2003 | JSR 154 | J2EE 1.4, J2SE 1.3 | web.xml uses XML Schema
Servlet 2.3 | August 2001 | JSR 53 | J2EE 1.3, J2SE 1.2 | Addition of Filter
Servlet 2.2 | August 1999 | JSR 902, JSR 903 | J2EE 1.2, J2SE 1.2 | Becomes part of J2EE, introduced independent web applications in .war files
Servlet 2.1 | November 1998 | 2.1a | Unspecified | First official specification, added RequestDispatcher, ServletContext
Servlet 2.0 | December 1997 | N/A | JDK 1.1 | Part of April 1998 Java Servlet Development Kit 2.0
Servlet 1.0 | December 1996 | N/A | N/A | Part of June 1997 Java Servlet Development Kit (JSDK) 1.0

Servlet versions and Apache Tomcat versions

Servlet Spec | JSP Spec | EL Spec | WebSocket Spec | JASPIC Spec | Apache Tomcat Version | Latest Released Version | Supported Java Versions
4.0 | 2.3 | 3.0 | 1.1 | 1.1 | 9.0.x | 9.0.31 | 8 and later
3.1 | 2.3 | 3.0 | 1.1 | 1.1 | 8.5.x | 8.5.51 | 7 and later
3.1 | 2.3 | 3.0 | 1.1 | N/A | 8.0.x (superseded) | 8.0.53 (superseded) | 7 and later
3.0 | 2.2 | 2.2 | 1.1 | N/A | 7.0.x | 7.0.100 | 6 and later (7 and later for WebSocket)
2.5 | 2.1 | 2.1 | N/A | N/A | 6.0.x (archived) | 6.0.53 (archived) | 5 and later
2.4 | 2.0 | N/A | N/A | N/A | 5.5.x (archived) | 5.5.36 (archived) | 1.4 and later
2.3 | 1.2 | N/A | N/A | N/A | 4.1.x (archived) | 4.1.40 (archived) | 1.3 and later
2.2 | 1.1 | N/A | N/A | N/A | 3.3.x (archived) | 3.3.2 (archived) | 1.1 and later

Servlet Example: HelloWorld

1. Generate a Maven project

$ mvn archetype:generate -DgroupId=com.taogen.example -DartifactId=servlet-helloworld -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

2.Add servlet-api dependencies in pom.xml

<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>servlet-api</artifactId>
    <version>2.5</version>
    <scope>provided</scope>
</dependency>

3. Add HelloWorldServlet.java

package com.taogen.example;

import java.io.*;
import java.util.Date;
import javax.servlet.*;
import javax.servlet.http.*;

// Extend HttpServlet class
public class HelloWorldServlet extends HttpServlet {

    private String message;

    public void init() throws ServletException {
        // Do required initialization
        message = "Hello World!";
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {

        // Set response content type
        response.setContentType("text/html");

        // Actual logic goes here.
        PrintWriter out = response.getWriter();
        out.println("<h3>" + this.message + "</h3>");
    }

    public void destroy() {
        // do nothing.
    }
}

4. Configure the servlet in web.xml

<servlet>
    <servlet-name>HelloWorld</servlet-name>
    <servlet-class>com.taogen.example.HelloWorldServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>HelloWorld</servlet-name>
    <url-pattern>/HelloWorld</url-pattern>
</servlet-mapping>

5. Run the project

Package the project with mvn package, move the WAR file to the Apache Tomcat webapps directory, and start the Tomcat server.

Visit the URL http://localhost:8080/{your-servlet-context}/HelloWorld

The Servlet Interface

The Servlet interface is the central abstraction of the Java Servlet API. All servlets implement this interface, either directly or indirectly. The Java Servlet API provides two classes that implement the Servlet interface: GenericServlet and HttpServlet. Developers typically implement their servlets by extending HttpServlet.

Request Handing Methods

The basic Servlet interface defines a service method for handling client requests. This method is called for each request that the servlet container routes to a servlet instance.

To handle the concurrent requests of a web application, the web developer usually must design servlets that can cope with multiple threads executing the service method at the same time.

Generally, the web container handles concurrent requests to the same servlet by executing the service method concurrently on different threads.

HTTP Specific Request Handling Methods

HttpServlet, an abstract subclass of the Servlet interface, adds methods to aid in processing HTTP-based requests. These methods are:

  • doGet
  • doPost
  • doPut
  • doDelete
  • doHead
  • doOptions
  • doTrace

Number of Instances

In a non-distributed environment, there can be only one instance of each servlet in the servlet container. If a servlet implements the SingleThreadModel interface, the servlet container may instantiate multiple instances of it.

SingleThreadModel guarantees that only one thread at a time executes a servlet instance's service method, avoiding concurrent access to it. However, this goal can be achieved in other ways, and SingleThreadModel is deprecated.

Servlet Life Cycle

Servlets are managed through a well-defined life cycle that specifies how they are loaded and instantiated, initialized, used to handle client requests, and taken out of service. The life cycle is expressed in the API by the init, service, and destroy methods of the javax.servlet.Servlet interface, which every servlet must implement, either directly or indirectly through the GenericServlet or HttpServlet abstract classes.

Loading and Instantiation

The servlet container is responsible for loading and instantiating servlets. Loading and instantiation can occur when the container starts, or be delayed until the container needs the servlet to handle a request. Servlets are lazily loaded by default.

Initialization

After the servlet object is instantiated, and before it can handle client requests, the container must initialize the servlet. Initialization lets a servlet read persistent configuration data, initialize costly resources, and perform other one-time activities. The container initializes the servlet by calling the init method of the Servlet interface with a unique (per servlet declaration) object implementing the ServletConfig interface. This configuration object gives the servlet access to name-value initialization parameters from the web application's configuration.

Request Handling

After a servlet is properly initialized, the servlet container may use it to handle client requests. A request is represented by a ServletRequest object, and the servlet responds by calling methods of a ServletResponse object. These two objects are passed as parameters to the service method of the Servlet interface.

For HTTP requests, the objects provided by the container are of type HttpServletRequest and HttpServletResponse.

Multithreading Issues

A servlet container may send concurrent requests through a servlet's service method. To handle these requests, the servlet developer must make adequate provisions for concurrent processing with multiple threads in the service method.

Marking a servlet's service method as synchronized is not recommended, because the container would then be unable to use an instance pool and would have to execute requests sequentially, which severely degrades the servlet's performance.

Exception During Request Handling

While handling a request, a servlet may throw a ServletException or an UnavailableException. A ServletException signals that an error occurred during the processing of the request and that the container should take appropriate measures to clean it up. An UnavailableException signals that the servlet is unable to handle requests, either temporarily or permanently; the container must then remove the servlet from service, call its destroy method, and release the servlet instance.

Thread Safety

Implementations of the request and response objects are not guaranteed to be thread safe. This means they should only be used within the scope of the request handling thread; references to the request and response objects should not be handed off to other threads.

End of Service

A servlet container is not required to keep a servlet loaded for any particular period of time. A servlet instance may live in the container for days, months, or years.

When the servlet container decides to remove a servlet from service, it calls the destroy method of the Servlet interface so that the servlet can release any resources it is using and save any persistent state.

Before calling the destroy method, the container must allow any threads currently running in the servlet's service method to finish, or to exceed a server-defined time limit.

Once the destroy method has been called on a servlet instance, the instance receives no more requests. If the container needs the servlet again, it must create a new instance. After the destroy method completes, the container must release the servlet instance so that it can be garbage collected.

The Request

The request object encapsulates all the information of the client request. In the HTTP protocol, this information is the headers and the message body of the HTTP request transmitted from the client to the server.

HTTP Protocol Parameters

Parameters are stored as a set of name-value pairs. The methods of the ServletRequest interface for obtaining parameters are:

  • getParameter
  • getParameterNames
  • getParameterValues
  • getParameterMap

Data from the query string and the post body are aggregated into the request parameter set, with query string data presented before post body data. For example, if a request has the query string a=hello (http://xxx?a=hello) and a post body of a=goodbye&a=world, the resulting parameter set is a=(hello, goodbye, world).

Conditions under which post form data is converted into the parameter set:

  1. The request is an HTTP or HTTPS request.
  2. The HTTP method is POST.
  3. The content type is application/x-www-form-urlencoded.
  4. The servlet has made an initial call to any of the getParameter family of methods on the request object.

If any of these conditions is not met, the post form data is not included in the parameter set, but it can still be read through the request object's input stream. If all the conditions are met, the post form data can no longer be read from the request object's input stream. The data appears either as form parameters or via the input stream, but never both.

Attributes

Attributes are objects associated with a request. Attributes may be set by the container to express information that otherwise cannot be expressed via the API, or set by a servlet to communicate information to another servlet (via the RequestDispatcher). The methods of the ServletRequest interface for manipulating attributes are:

  • getAttribute
  • getAttributeNames
  • setAttribute

Only one attribute value may be associated with a given attribute name.

Attribute names prefixed with “java.” and “javax.” are defined by the servlet specification.

Headers

The methods of the HttpServletRequest interface for obtaining headers are:

  • getHeader
  • getHeaders
  • getHeaderNames

Multiple headers may share the same name. If there are multiple headers with the same name, the getHeader method returns the first one, and the getHeaders method returns an Enumeration of all of them.

Headers may contain int or Date data represented as strings. The HttpServletRequest interface provides methods to obtain a header directly as an int or Date value:

  • getIntHeader
  • getDateHeader

The getIntHeader method may throw a NumberFormatException, and the getDateHeader method may throw an IllegalArgumentException.

Request Path Elements

request URI = Context Path + Servlet Path + Path Info + Query String

e.g. http://myserver.com/myproject/myservlet/remove?id=1

Configuration

  • <url-pattern>/myservlet/*</url-pattern>

Result

  • Context Path: /myproject
  • Servlet Path: /myservlet
  • Path Info: /remove
  • Query String: ?id=1

Path Translation Methods

The Servlet API allows a developer to obtain the absolute file system path of a file in a Java web application. These methods are:

  • ServletContext.getRealPath
  • HttpServletRequest.getPathTranslated

Cookies

The HttpServletRequest interface provides the getCookies method to obtain the array of cookies present in the request. These cookies are data sent from the client to the server on every request the client makes.

The server can add and delete cookies, and set cookie attributes such as the expiration time.

Internationalization

Clients can indicate to the web server which language they would prefer the response to be given in, using the Accept-Language header. The ServletRequest interface provides methods for obtaining the client's preferred languages:

  • getLocale
  • getLocales

The getLocale method returns the locale of the client's most preferred language. The getLocales method returns an Enumeration of locale objects representing, in decreasing order of preference, all the languages acceptable to the client.

If the client has not indicated a preferred language, getLocale returns the servlet container's default locale, and getLocales returns an enumeration containing only that default locale.

Request data encoding

If the client has not specified the character encoding of the HTTP request via the Content-Type header, HttpServletRequest.getCharacterEncoding returns null.

The container reads request data with a default encoding of ISO-8859-1. A developer can set the request's character encoding with the setCharacterEncoding method. The character encoding must be set before any request data is read; once the data has been read, setting the encoding has no effect.
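A small sketch (the parameter name is illustrative; note that setCharacterEncoding declares UnsupportedEncodingException):

// Must be called before any request data is read:
request.setCharacterEncoding("UTF-8");
String name = request.getParameter("name");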

Lifetime of the Request Object

Each request object is valid only within the scope of a servlet's service method, or within the scope of a filter's doFilter method. To reduce the cost of creating request objects, containers commonly recycle them. Developers must be aware that keeping references to a request object outside its valid scope is not recommended, as it may lead to unexpected results.

Servlet Context

The ServletContext interface defines a set of methods that a servlet uses to communicate with its servlet container: for example, to get the MIME type of a file, dispatch requests, write to a log, and set and store attributes that all servlets in the application can access.

Scope of ServletContext Interface

There is one ServletContext object per web application per Java virtual machine.

Initialization Parameters

The ServletContext interface allows a servlet to access context initialization parameters defined in the deployment descriptor via these methods:

  • getInitParameter
  • getInitParameterNames

Application developers use initialization parameters to convey setup information, such as a webmaster's email address or critical configuration data for the system.

Context Attributes

A servlet can bind an object attribute into the context by name. Any attribute bound into the context is available to any other servlet in the same web application. The methods of the ServletContext interface for manipulating attributes are:

  • setAttribute
  • getAttribute
  • getAttributeNames
  • removeAttribute

Resources

The ServletContext interface provides direct access to documents of static content types, such as HTML, GIF, and JPEG files, via the following methods:

  • getResource
  • getResourceAsStream

The path of a static file is a path relative to the context root, starting with '/'. These methods cannot be used to obtain dynamic content; for example, getResource("/index.jsp") does not work.

Temporary Working Directories

Servlet containers must provide a private temporary directory for each servlet context, accessible through the javax.servlet.context.tempdir context attribute.

The Response

The response object encapsulates all the information to be returned from the server to the client. In the HTTP protocol, this information is transmitted from the server to the client in the HTTP headers and the message body.

Buffering

For efficiency, a servlet container is allowed (but not required) to buffer output going to the client. Servers typically buffer by default, and allow servlets to specify buffering parameters.

The methods of the ServletResponse interface that let a servlet access and set buffering information are:

  • getBufferSize
  • setBufferSize
  • isCommitted
  • reset
  • resetBuffer
  • flushBuffer

The ServletResponse interface provides these methods to perform buffering operations regardless of whether the servlet uses a ServletOutputStream or a Writer.

Headers

A servlet can set the headers of an HTTP response via the following methods of the HttpServletResponse interface:

  • setHeader
  • addHeader

The HttpServletResponse interface also provides methods for adding headers with specific data types:

  • setIntHeader
  • setDateHeader
  • addIntHeader
  • addDateHeader

Convenience Methods

The convenience methods of the HttpServletResponse interface are:

  • sendRedirect
  • sendError

sendRedirect takes a complete request URI, including the context path, i.e. /contextPath/servletPath/pathInfo, whereas the forward and include methods of RequestDispatcher take a path relative to the context path, i.e. /servletPath/pathInfo.
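A sketch inside a servlet's doGet method; the /login path is illustrative, and the two calls are alternatives, not to be used together in one request:

// Redirect: the client issues a new request, so the context path must be included.
response.sendRedirect(request.getContextPath() + "/login");

// Forward: handled on the server side; the path is relative to the context root.
request.getRequestDispatcher("/login").forward(request, response);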

Internationalization

A servlet can set the locale and character encoding of a response.

A servlet can set the response locale using the setLocale method of ServletResponse. setLocale has no effect if the response has already been committed. If the servlet does not set the locale before the response is committed, the container's default locale determines the response's locale.

A servlet can use a locale-encoding mapping so that setting a particular locale also selects the corresponding character encoding:

<locale-encoding-mapping-list>
    <locale-encoding-mapping>
        <locale>zh-CN</locale>
        <encoding>UTF-8</encoding>
    </locale-encoding-mapping>
</locale-encoding-mapping-list>

ServletResponse provides methods to set the character encoding:

  • setCharacterEncoding
  • setContentType

As with setLocale, setting the character encoding has no effect after the response has been committed. If the servlet does not set the character encoding before the getWriter method of ServletResponse is called, or before the response is committed, the default encoding ISO-8859-1 is used.

Closure of Response Object

When a response is closed, the container must immediately flush all remaining content in the response buffer to the client.

Events that indicate the request and response objects should be closed:

  • The servlet's service method terminates.
  • The amount of content specified by the response's setContentLength method, when greater than zero, has been written to the response.
  • The sendError method is called.
  • The sendRedirect method is called.

Lifetime of Response Object

Each response object is valid only within the scope of a servlet's service method or a filter's doFilter method. Containers commonly recycle response objects to avoid the cost of creating them. Developers must be aware that keeping references to a response object outside this scope may lead to unexpected behavior.

Filtering

A filter is a Java servlet component that can access and modify the headers and payload of a request, and of its response, as the request is processed.

What is a Filter

A filter is a reusable piece of code, a Java class, that can transform the content of HTTP requests, responses, and headers. Filters do not usually create a response or respond to a request as servlets do; rather, they modify or adapt a request for a resource, and modify or adapt the response from a resource.

Filters can act on dynamic or static content, which is referred to generally as web resources.

Types of functionality a developer can implement with filters include:

  • Accessing a resource before a request to it is invoked.
  • Processing a request for a resource before it is invoked.
  • Modifying request headers and data by wrapping the request in a customized request object.
  • Modifying response headers and data by wrapping the response in a customized response object.
  • Intercepting an invocation of a resource after the call.
  • Applying one or more filters, in a specified order, to a servlet or a group of servlets.

Common examples of filtering components:

  • Authentication filters
  • Logging and auditing filters
  • Image conversion filters
  • Data compression filters
  • Encryption filters
  • Tokenizing filters
  • Filters that trigger resource access events
  • XSL/T filters that transform XML content
  • MIME-type chain filters
  • Caching filters

Main Concepts

A developer creates a filter by implementing the javax.servlet.Filter interface and providing a public constructor taking no arguments. The filter is declared with the <filter> element in the deployment descriptor. A filter or a collection of filters is configured for invocation by defining <filter-mapping> elements in the deployment descriptor, mapped by a servlet's logical name or by resource URLs.

Filter Lifecycle

After a web application is deployed, and before an incoming request is processed, the container must locate the list of filters that apply to the requested web resource. The container must ensure that each filter is instantiated and initialized by a call to its init(FilterConfig config) method.

There is exactly one instance per filter declaration. When the container receives a request, it calls the doFilter method of the first filter instance, passing in the ServletRequest and ServletResponse, together with a FilterChain object used to pass the request on to the next filter.
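A minimal filter sketch (the class name and log messages are illustrative):

import java.io.IOException;
import javax.servlet.*;

public class LoggingFilter implements Filter {
    public void init(FilterConfig config) throws ServletException {
        // read init parameters from config here if needed
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        System.out.println("before the target resource");
        chain.doFilter(request, response); // pass the request to the next filter or the target resource
        System.out.println("after the target resource");
    }

    public void destroy() {
        // release resources here if needed
    }
}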

Wrapping Requests and Responses

Central to the notion of filtering is wrapping the request or response, so that the filter can override behavior to perform its task. Developers can not only override the existing methods of the request and response objects, but also provide new API suited to a particular filtering task.

When a filter's doFilter method is invoked, the container must ensure that the request and response objects it passes to the next filter, or to the target web resource, are the same objects that were passed into the current doFilter method. The same wrapper-object requirement applies to calls from a servlet or a filter to RequestDispatcher.forward or include.

Filter Environment

A filter's initialization parameters are defined in the <init-params> element within the <filter> element of the deployment descriptor. The filter accesses them via the getInitParameter method of FilterConfig. In addition, FilterConfig gives access to the ServletContext object, for loading resources, logging, and storing state in ServletContext attributes.

Configuration of Filters in a Web Application

For example:

<filter>
    <filter-name>My Filter</filter-name>
    <filter-class>com.example.MyFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>My Filter</filter-name>
    <servlet-name>MyServlet1</servlet-name>
    <url-pattern>/foo/*</url-pattern>
</filter-mapping>

Filters and the RequestDispatcher

Since Java Servlet 2.4, a filter can be configured to also be invoked under RequestDispatcher forward() and include() calls. The <dispatcher> element specifies the conditions under which requests are filtered. Its values are:

  • REQUEST: filter when the request comes directly from the client.
  • FORWARD: filter when the request is being processed under a RequestDispatcher forward() call.
  • INCLUDE: filter when the request is being processed under a RequestDispatcher include() call.
  • ERROR: filter when the request is being dispatched to an error resource.
  • Any combination of the above values.

An example of using the <dispatcher> element:

<filter-mapping>
    <filter-name>Logging Filter</filter-name>
    <url-pattern>/products/*</url-pattern>
    <dispatcher>FORWARD</dispatcher>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>

Sessions

HTTP is a stateless protocol. To build effective web applications, it is necessary to associate requests from a particular client with each other. The servlet specification defines a simple HttpSession interface that lets a servlet container use any of several approaches to track a user's session, without involving the developer in the nuances of each approach.

Session Tracking Mechanisms

Cookies

Session tracking through HTTP cookies is the most commonly used mechanism. The container sends a cookie to the client; the client then returns the cookie on each subsequent request to the server, unambiguously associating the request with a session. The name of the session tracking cookie must be JSESSIONID.

URL Rewriting

URL rewriting is another approach to session tracking. When a client does not accept cookies, URL rewriting can be used for session tracking. URL rewriting adds the session ID to the URL path; the container parses the URL and associates the request with a session. An example of a rewritten URL:

http://www.myserver.com/index.html;jsessionid=1234

URL rewriting is effective only when the client does not accept cookies. When the client accepts cookies, a rewritten URL is not parsed by the container, and the request is not associated with a session through the URL.

Creating Session

A session is considered new when it is only a prospective session and has not yet been established. Because HTTP is a request-response protocol, an HTTP session is considered new until a client “joins” it. A client joins a session when session tracking information has been returned to the server, indicating that a session has been established. Until the client joins the session, it cannot be assumed that the client's next request will be recognized as part of the session.

Session Scope

HttpSession objects must be scoped at the application (or servlet context) level. Different contexts may receive the same session-establishing cookie, but the container must not share session objects between contexts.

Binding Attributes into a Session

A servlet can bind an object attribute into an HttpSession by name. An object bound into a session is available to any other servlet that belongs to the same ServletContext and handles a request belonging to the same session.

Session Timeouts

In the HTTP protocol, there is no explicit termination signal when a client is no longer active. This means that the only mechanism that can indicate that a client is no longer active is a timeout period.

The default session timeout period is defined by the servlet container and can be obtained via the getMaxInactiveInterval method of the HttpSession interface. A developer can change the timeout with the setMaxInactiveInterval method of HttpSession. The timeout periods used by these methods are defined in seconds. If the timeout period of a session is set to -1, the session never expires.
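A small sketch inside a servlet's doGet method (request is the HttpServletRequest; names and values are illustrative):

HttpSession session = request.getSession(); // create or join the session
session.setAttribute("user", "Tom"); // bind an attribute into the session
String user = (String) session.getAttribute("user");
session.setMaxInactiveInterval(30 * 60); // time out after 30 minutes of inactivity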

Important Session Semantics

Threading Issues

Multiple servlets executing request threads may have access to the same session object at the same time. Access to the session object should be synchronized, and the developer is responsible for synchronizing access to session resources as appropriate.

Client Semantics

Since cookies and SSL certificates are generally controlled by the web browser and are not associated with any particular browser window, requests from all of a client browser's windows to the servlet container may be part of the same session. For maximum portability, the developer should always assume that all windows of a client participate in the same session.

Dispatching Requests

When building a web application, it is often useful to forward the processing of a request to another servlet, or to include the output of another servlet in the response. The RequestDispatcher interface provides a mechanism to accomplish this.

Obtaining a RequestDispatcher

An object implementing the RequestDispatcher interface can be obtained from the ServletContext via the following methods:

  • getRequestDispatcher
  • getNamedDispatcher

The getRequestDispatcher method takes a String argument describing a path within the scope of the ServletContext. This path must be relative to the root of the ServletContext and begin with '/'. The method uses the path to look up a servlet.

Using a Request Dispatcher

To use a request dispatcher, a servlet calls either the include or the forward method of the RequestDispatcher interface. The parameters to these methods can be the request and response objects that were passed into the service method of the javax.servlet interface, or instances of subclasses of the request and response wrapper classes. The container should ensure that the dispatch of the request to a target servlet occurs in the same thread of the same JVM as the original request.
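A small sketch inside a servlet's doGet method; the /result path is illustrative:

RequestDispatcher dispatcher = getServletContext().getRequestDispatcher("/result");
dispatcher.forward(request, response);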

The Include Method

The include method of the RequestDispatcher interface may be called at any time. The target servlet of an include has access to all aspects of the request object, but its use of the response object is more limited.

An included servlet can only write information to the ServletOutputStream or Writer of the response object. It cannot set headers or call any method that affects the headers of the response; calling HttpServletRequest.getSession() throws an IllegalStateException.

Included Request Parameters

When a servlet is invoked via the include method of RequestDispatcher, the following request attributes must be set:

  • javax.servlet.include.request_uri
  • javax.servlet.include.context_path
  • javax.servlet.include.servlet_path
  • javax.servlet.include.path_info
  • javax.servlet.include.query_string

The Forward Method

The forward method of RequestDispatcher may be called only when no output has been committed to the client. If there is uncommitted output data in the response buffer, it must be cleared before the target servlet's service method is invoked. If the response has been committed, an IllegalStateException must be thrown.

When a servlet is invoked via the forward method of RequestDispatcher, the following request attributes must be set:

  • javax.servlet.forward.request_uri
  • javax.servlet.forward.context_path
  • javax.servlet.forward.servlet_path
  • javax.servlet.forward.path_info
  • javax.servlet.forward.query_string

Error Handling

If the target servlet of a request dispatcher throws a runtime exception, or a checked exception of type ServletException or IOException, it should be propagated to the calling servlet. All other exceptions should be wrapped in a ServletException, with the original exception set as the root cause, since they should not be propagated as-is.

Web Applications

A web application is a collection of servlets, HTML pages, classes, and other resources. A web application can run in containers from different vendors.

Web Applications Within Web Servers

A web application is rooted at a specific path within a web server. For example, a catalog application could be located at http://www.mycrop.com/catalog. All requests that start with this prefix are routed to the ServletContext that represents the catalog application.

Relationship to ServletContext

A servlet container must enforce a one-to-one correspondence between a web application and a ServletContext. The ServletContext object provides a servlet with its view of the application.

Elements of a Web Application

A web application may consist of the following items:

  • Servlets
  • JSP Pages
  • Utility Classes
  • Static documents (HTML, images, sounds, etc.)
  • Client-side Java applets, beans, and classes
  • Descriptive meta information that ties all of the above elements together

Directory Structure

A web application exists as a structured hierarchy of directories. The application hierarchy contains a special directory named WEB-INF, which holds files related to the application that are not in the application's document root. The WEB-INF node is not part of the public document tree of the application: the container must not serve files in the WEB-INF directory directly to a client, and any request for a resource under WEB-INF returns a 404. However, the contents of WEB-INF are visible to the getResource and getResourceAsStream methods of ServletContext, and to RequestDispatcher calls. So if an application developer needs access to files, such as application configuration files, that should not be exposed directly to the client, they can be placed under WEB-INF. The contents of the WEB-INF directory are:

  • /WEB-INF/web.xml: deployment descriptor
  • /WEB-INF/classes/: servlet and utility classes
  • /WEB-INF/lib/*.jar: Java archive files containing servlets, beans, and other utility classes

Web Application Archive File

Web applications can be packaged and signed into a Web ARchive format (WAR) file using the standard Java archive tools. The META-INF directory in a WAR file contains information useful to the Java archive tools; this directory must not be served directly to a client.

Web Application Deployment Descriptor

The web application deployment descriptor includes the following types of configuration and deployment information:

  • ServletContext Init Parameters
  • Session Configuration
  • Servlet/JSP Definitions
  • Servlet/JSP Mappings
  • Welcome File list
  • Error Pages
  • Security

Dependencies On Extensions

When a number of applications make use of the same code or resources, they are typically installed as library files in the servlet container. These files are usually common or standard APIs that can be used without sacrificing portability. The servlet container must provide a directory for these libraries; files in that directory must be available to all web applications. The location of the directory is container specific.

Application developers that depend on such extensions must provide a META-INF/MANIFEST.MF entry in the WAR file listing all the extensions the WAR needs. The format of the manifest entry should follow the standard JAR manifest format.

The web container must also be able to recognize declared dependencies expressed in the manifest entry of any library JARs under the WEB-INF/lib entry in the WAR.

Error Handling

When an exception occurs in a servlet or JSP page, the following attributes must be set:

  • javax.servlet.error.status_code (java.lang.Integer)
  • javax.servlet.error.exception_type (java.lang.Class)
  • javax.servlet.error.message (java.lang.String)
  • javax.servlet.error.exception (java.lang.Throwable)
  • javax.servlet.error.request_uri (java.lang.String)
  • javax.servlet.error.servlet_name (java.lang.String)

These attributes allow a servlet to generate specialized error content.

Error Pages

To allow developers to customize the content returned to a web client when a servlet raises an error, the deployment descriptor defines a list of error page descriptions. The syntax allows a resource to be returned by the container either when a servlet or filter calls sendError on HttpResponse with a specific status code, or when an exception or error raised by a servlet propagates to the container.

If the sendError method is called on the response, the container consults the web application's list of error page declarations that use the status-code syntax and attempts a match. If there is a match, the container returns the resource indicated by the location entry. Example error page declarations:

<error-page>
    <error-code>404</error-code>
    <!-- /src/main/webapp/error-404.html -->
    <location>/error-404.html</location>
</error-page>
<error-page>
    <exception-type>java.lang.Exception</exception-type>
    <location>/errorHandler</location>
</error-page>

Web Application Deployment

When a web application is deployed into a container, the following steps must be performed before the web application begins processing client requests:

  • Instantiate an instance of each listener identified by a <listener> element in the deployment descriptor.
  • For instantiated listener instances that implement ServletContextListener, call the contextInitialized() method.
  • Instantiate an instance of each filter identified by a <filter> element in the deployment descriptor, and call each filter instance's init() method.
  • Instantiate an instance of each servlet identified by a <servlet> element that includes a <load-on-startup> element, and call each servlet instance's init() method.

Inclusion of a web.xml Deployment Descriptor

A web application does not need to contain a web.xml if it contains no servlet, filter, or listener components. In other words, a web application that contains only static files or JSP pages does not require a web.xml.

Application Lifecycle Events

The application event facility gives developers greater control over the life cycles of the ServletContext, HttpSession, and ServletRequest, enables better code factorization, and improves efficiency in managing the resources the web application uses.

Event Listener

An application event listener is a class that implements one or more of the servlet event listener interfaces. Listeners are instantiated and registered in the web container when the web application is deployed.

Servlet event listeners support event notifications for state changes in the ServletContext, HttpSession, and ServletRequest objects. There are multiple listener classes for each event type, and the developer may specify the order in which the container invokes the listener beans for each event type.

Event Types and Listener Interfaces

  • ServletContextListener
  • ServletContextAttributeListener
  • HttpSessionListener
  • HttpSessionAttributeListener
  • HttpSessionActivationListener
  • HttpSessionBindingListener
  • ServletRequestListener
  • ServletRequestAttributeListener

Deployment Descriptor Example

<web-app>
    <display-name>MyListeningApplication</display-name>
    <listener>
        <listener-class>com.acme.MyConnectionManager</listener-class>
    </listener>
    <listener>
        <listener-class>com.acme.MyLoggingModule</listener-class>
    </listener>
    <servlet>
        <display-name>RegistrationServlet</display-name>
        ...etc
    </servlet>
</web-app>

Mapping Requests to Servlets

Use of URL Paths

URL path mapping rules are applied in the following order; the first successful match is used, with no further matching attempted:

  • The container tries to find an exact match between the request path and a servlet path.
  • The container recursively tries to match the longest path prefix. This is done by stepping down the path tree one directory at a time, using the '/' character as a path separator.
  • If the last segment of the URL path contains an extension, e.g. .jsp, the servlet container tries to match a servlet that handles requests for that extension.
  • If none of the previous three rules produces a match, the container tries to serve content appropriate for the requested resource. If the application defines a default servlet, it will be used.

Specification of Mappings

In the deployment descriptor of a web application, the following syntax is used to define mappings:

  • A string beginning with a '/' character and ending with '/*' is used for path mapping.
  • A string beginning with '*.' is used as an extension mapping.
  • A string containing only the '/' character indicates the application's default servlet.
  • All other strings are used for exact matches only.

Mapping Set Example

  • /foo/bar/*
  • /baz/*
  • /catalog
  • *.bop

Security

Declarations in a servlet deployment descriptor can configure the security of an application. Web applications contain resources that can be accessed by many users. These resources are often exposed on unprotected, open networks such as the Internet, so many web applications have security requirements. Servlet containers provide mechanisms and infrastructure to satisfy these requirements, with the following characteristics:

  • Authentication: the means by which communicating entities prove to one another that they are acting on behalf of specific identities that are authorized for access.
  • Access control for resources: the means by which interactions with resources are limited to collections of users or programs, in order to enforce integrity, confidentiality, or availability constraints.
  • Data integrity: the means used to prove that information has not been modified by a third party while in transit.
  • Confidentiality or data privacy: the means used to ensure that information is available only to users authorized to access it.

Implementing Security in Servlet

Configure security in the web application's deployment descriptor:

  • <security-role>: Defining roles.
  • <login-config>: Defining how to authenticate user. e.g. login by username and password in login page.
  • <security-constraint>: Defining constrained resources URI, HTTP methods, constraint role, data constraint type.

Configure the legal roles, users, and passwords in the conf/tomcat-users.xml file of the Apache Tomcat server:

  • <role> and <user>

Use a custom login form, or the browser's default login dialog, for security authentication.


This post introduces how to program better: how to write clean code, and how to turn bad code into good code.

Clean Code

What is Clean Code

Clean code should have the following characteristics:

  • Cleanliness and readability. 1) Simple and clear, expressing the code's intent. 2) Easy to read, with meaningful names. 3) Consistent, well-formatted code.
  • Maintainability and extensibility. 1) Single responsibility: each class and method does one thing well. 2) The open-closed principle. 3) No duplication. 4) Minimal dependencies.
  • Robustness. 1) Error handling. 2) Full unit test coverage, with no latent bugs.
  • Efficiency. Performance as close to optimal as possible.

Why do We Need Clean Code

Code is the final expression of requirements. To implement software, we need to write correct, clean code that runs efficiently on machines and is easy to extend and maintain.

Writing bad code can have serious consequences. Bad code continuously lowers a team's productivity. It can make software unmaintainable and riddled with bugs, until the software can no longer run properly. In the end, the code has to be redesigned: a new team is formed to rebuild the old system and to keep up with changes to it.

Causes of Bad Code

There are many causes of bad code, for example: 1) changing requirements; 2) schedules that are too tight to allow better work; 3) being tired of a program and wanting to finish it early; 4) having a pile of other work and rushing to get this done; 5) poor management. ...

As developers, we should not simply blame external causes; we should ask ourselves whether we could do better. 1) When the schedule or plan is unreasonable, we should give feedback promptly; most managers want to see the truth, and they want good code. 2) During development we should stay professional and keep the code clean continuously, just as a doctor washes their hands before surgery. Spending a little extra time keeping the code clean prevents it from turning into bad code. The only way to make the deadline is to keep the code as clean as possible at all times; making a mess to go fast will ultimately slow you down.

Meaningful Names

Names are everywhere in software development. We name variables, methods, parameters, classes, packages, and project directories. Since we do so much naming, we should do it well. This part introduces rules for creating good names, and bad naming practices to avoid.

Rules for Creating Good Names

Choosing a good name takes time, but it saves more time than it takes. Keep paying attention to the names in your code, and change a name as soon as you find a better one.

Use Intention-Revealing Names

A good name should tell us why it exists, what it does, and how it is used. If a name requires a comment to explain it, then it does not reveal its intent. Intention-revealing names make code easier to understand and change. For example:

Bad:

int d; // elapsed time in days

Good:

int elapsedTimeInDays;
int daysSinceCreation;

Common ways to reveal intent:

  • Use intention-revealing names.
  • Use named constants instead of literal values.
  • Use objects to encapsulate groups of data.

Make Meaningful Distinction

If names must be different, then they should also mean something different. Meaningful distinctions let the reader clearly see how the names differ.

Common meaningless distinctions:

  • Number-series naming: a1, a2, ... aN. E.g. copyChars(char a1[], char a2[]) => copyChars(char source[], char destination[]).
  • Noise-word synonyms. E.g. 1) info and data: ProductInfo and ProductData. 2) a, an, and the: zork and theZork.

Use Pronounceable Names

Our brains are better at processing pronounceable language. Pronounceable names are easier to discuss with other people. Prefer combinations of complete words when naming. For example:

Bad:

class DtaRcrd102 {
    private Date genymdhms;
    private Date modymdhms;
    private final String pszqint = "102";
}

Good:

class Customer {
    private Date generationTimestamp;
    private Date modificationTimestamp;
    private final String recordId = "102";
}

Use Searchable Names

Single-letter names and numeric constants are hard to locate across a body of text. Avoid single-letter names and bare constant literals.

Class Names

Class and object names should be nouns or noun phrases, such as Customer, WikiPage, Account, and AddressParser.

Method Names

Method names should be verbs or verb phrases, such as postPayment, deletePage, and save.

Pick One Word per Concept

Pick one word for one abstract concept and stick with it. For example, using fetch, retrieve, and get for methods with similar functionality in different classes is confusing; the same goes for controller, manager, and driver. Stick to one word per concept.

Use Solution Domain Names and Problem Domain Names

The people who read your code are programmers, so stick to computer science terms. Most programmers are familiar with CS terminology, and an unfamiliar term can be looked up quickly online without consulting the author.

Add Meaningful Context

增加有意义的 Context 让读者能清晰直观地知道这个变量是某个类的一部分。

For example, firstName, lastName, street, houseNumber, city, state, and zipcode are members of an Address class, but state on its own does not obviously belong to an address. We can add a prefix for context: state => addressState.

Avoid Creating Bad Names

Avoid Disinformation

We should avoid leaving false clues that obscure the meaning of code.

  • Avoid misleading abbreviations, e.g. shortening hypotenuse to hp.
  • When a collection's underlying data structure is not fixed, avoid encoding a container type in its name, e.g. accountList => accountGroup, bunchOfAccounts, or simply accounts.

Avoid Encodings

Encoded names add the burden of decoding. They are rarely pronounceable, and the encoded type information is easy to get wrong or misread. Avoid encoded names.

Common encoding schemes:

  • Hungarian Notation: the name begins with one or more lowercase letters encoding the variable's type, followed by the name itself, so every variable carries its type information, e.g. int[] aNames, boolean bDeleted.
  • Member prefixes such as m_.
  • Interface/implementation encodings, such as an I prefix on interfaces: IShapeFactory.

Avoid Mental Mapping

Avoid mental mapping: do not force readers to translate a name into another concept they already know, such as the name of a design pattern or an algorithm. When naming, use problem-domain and solution-domain terms directly.

Don’t Be Cute

If names are too clever, they will be remembered only by the people who share the author's joke: HolyHandGrenade actually meaning DeleteItems, whack() meaning kill(), eatMyShorts() meaning abort(). Avoid this kind of naming.

Don’t Pun

Avoid using the same word for two purposes. Using one term for two different ideas is a pun: it is hard to understand and easy to misread. For example, one add method adds or concatenates two values, while another add method appends an element to a collection. Rather than punning, use insert or append instead of a second add.

Don’t Add Gratuitous Context

Don't add context that isn't needed, such as prefixing every class in a module with a module identifier.

Summary of Meaningful Names

A single name should: reveal intent, be pronounceable, be searchable, be consistent, use professional terminology, and carry meaningful context.

Different names should be meaningfully distinguishable from one another.

The goals of meaningful naming: 1) cleanliness and maintainability: names reveal intent, so the code explains itself without comments; 2) readability: simple, unambiguous names let people read and understand the code more easily and quickly.

Functions

Small

The first rule of functions: they should be very small.

  1. Blocks and indenting. The blocks within if, else, while, and similar statements should be one line long, usually a function call with a nice descriptive name.

  2. The indent level of a function should not be greater than one or two. This makes functions easier to read and understand.

Do One Thing

Functions should do one thing. They should do it well. They should do it only.

How to tell whether a function does only one thing:

  • The function stays at one level of abstraction; it does not mix different abstraction levels.
  • The function cannot be divided into multiple sections.

Switch Statements

It is hard to make a switch statement do one thing; by its nature, a switch always does N things. We can encapsulate each branch of the switch as a method of its own, as sketched below.
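
A minimal sketch of that arrangement (OrderType, Order, and PaymentProcessor are hypothetical names): the switch stays small because each branch is a single call to a method that does that branch's one job.

enum OrderType { CASH, CREDIT }

class Order {
    OrderType type;
}

class PaymentProcessor {
    // The switch only dispatches; each branch's work lives in its own method.
    void process(Order order) {
        switch (order.type) {
            case CASH:   processCash(order);   break;
            case CREDIT: processCredit(order); break;
            default: throw new IllegalArgumentException("unknown type: " + order.type);
        }
    }

    private void processCash(Order order)   { /* cash-specific handling */ }
    private void processCredit(Order order) { /* credit-specific handling */ }
}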

Use Descriptive Names

A function's name should describe what the function does. Don't be afraid of long names: a long descriptive name is better than a short enigmatic one, and better than a long comment.

Function Arguments

The ideal number of arguments for a function is zero, then one, then two. Three arguments should be avoided wherever possible.

Problems with multiple arguments:

  • Each argument demands extra conceptual effort; the reader must interpret it every time.
  • The more arguments, the harder the function is to test.

Other argument issues (see the sketch after this list):

  1. Flag arguments. Passing a boolean into a function is a bad practice: it loudly proclaims that the function does more than one thing.
  2. Argument objects. When a function needs more than two or three arguments, wrap some of them into a class of their own.
  3. Verbs and keywords. A method name should form a verb/noun pair, e.g. writeField() instead of write().
  4. Output arguments. Output arguments are harder to understand than input arguments: the reader must track both what goes in and what comes out, doubling the effort. If your function must change some state, have it change the state of its owning object, e.g. appendFooter(String report) => report.appendFooter().
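
A minimal sketch of removing a flag argument (ReportRenderer and its method names are hypothetical): the boolean version admits to doing two things, so it is split into two methods that each do one.

class ReportRenderer {
    // Before: the flag proclaims that this method does two things.
    void render(boolean isSuite) {
        if (isSuite) renderForSuite();
        else renderForSingleTest();
    }

    // After: callers invoke the method whose name says what they want.
    void renderForSuite()      { /* render the whole suite */ }
    void renderForSingleTest() { /* render a single test page */ }
}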

Have No Side Effects

Your function promises to do one thing, but it may also do other hidden things. For example:

public boolean checkPassword(String userName, String password) {
    User user = UserGateway.findByName(userName);
    if (user != User.NULL) {
        if (user.getPassword().equals(password)) {
            Session.initialize();
            return true;
        }
    }
    return false;
}

In checkPassword() above, the call to Session.initialize() is a hidden side effect: initializing a session is no part of checking a password. The method could be renamed checkPasswordAndInitializeSession() and implemented as:

public boolean checkPasswordAndInitializeSession(String userName, String password) {
    boolean passwordRight = checkPassword(userName, password);
    if (passwordRight) {
        Session.initialize();
        return true;
    }
    return false;
}

Command Query Separation

Functions should either do something or answer something, but not both: a function should either change the state of an object or return some information about it. A bad example:

public boolean set(String attribute, String value);

This method both sets an attribute and reports whether the operation succeeded or the attribute existed. That leads to odd statements such as if (set("username", "unclebob")).

It should be split into two methods: one that performs the command and one that answers the query, as follows:

public void set(String attribute, String value);
public boolean attributeExists(String attribute);

Prefer Exception to Returning Error Codes

Returning error codes from command functions is a subtle violation of command query separation: when you return an error code, the caller must deal with the error immediately before it can continue, which leads to deeply nested conditionals. For example:

if (deletePage(page) == E_OK) {
    if (registry.deleteReference(page.name) == E_OK) {
        if (configKeys.deleteKey(page.name.makeKey()) == E_OK) {
            // ... continue with normal processing ...
        } else {
            // handle the key-deletion error here
        }
    }
}

If you use exceptions instead of error codes, the error handling need not happen immediately, the code loses its deep nesting, and it becomes cleaner:

try {
    deletePage(page);
    registry.deleteReference(page.name);
    configKeys.deleteKey(page.name.makeKey());
}
catch (Exception e) {
    logger.log(e.getMessage());
}

Extract Try/Catch Blocks

Try/catch blocks are ugly: they mix normal processing with error handling. It is better to extract the bodies of the try and catch blocks into functions of their own. The code above becomes:

public void delete(Page page) {
    try {
        deletePageAndAllReferences(page);
    } catch (Exception e) {
        logError(e);
    }
}

public void deletePageAndAllReferences(Page page) throws Exception {
    deletePage(page);
    registry.deleteReference(page.name);
    configKeys.deleteKey(page.name.makeKey());
}

private void logError(Exception e) {
    logger.log(e.getMessage());
}

Error Handling Is One Thing

Functions should do one thing, and error handling is one thing. So a function that handles errors should do nothing else: no statements should follow the catch/finally blocks, and an error-handling function should consist of nothing but the try/catch/finally structure.

Don’t Repeat Yourself

Duplication is the root of all evil in software. Eliminate duplicated code wherever you can.

Summary of Clean Functions

A function should be as small as possible, do one thing, take as few arguments as possible, and have a descriptive name.

How to do one thing: one level of abstraction, no divisible sections, no hidden side effects, and commands separated from queries.

Other concerns: error handling and eliminating duplication.

A professional programmer treats the system as a story to be told, rather than a program to be written.

Comments

Don’t comment bad code–rewrite it.

The proper use of comments is to compensate for our failure to express ourselves in code. When you feel the need to write a comment, think about whether the same thing could be expressed in the code itself. Every time you write a comment, you should feel the failure of your expressiveness.

Why write as few comments as possible? Because programmers rarely maintain them, so comments drift into falsehood.

  • Comments do not make up for bad code. Rather than spend time writing comments that explain a mess, spend the time cleaning the mess.

  • Explain yourself in code. Very often you can create a function that says the same thing the comment would have said. For example:

    // Check to see if the employee is eligible for full benefits
    if ((employee.flags & HOURLY_FLAG) && (employee.age > 65))

    becomes

    if (employee.isEligibleForFullBenefits())

Good Comments

Some comments are necessary or beneficial. But always remember that the truly good comment is the one you found a way not to write.

Common necessary comments:

  • Legal comments. Copyright and authorship statements are necessary and reasonable, e.g. // Copyright (C) 2003,2004,2005 by Object Mentor, Inc. All rights reserved.
  • Explanation of intent. Explains why something is done, e.g. // we are greater because we are the right type.
  • Warning of consequences, e.g. // Don't run unless you have some time to kill.
  • TODO comments. TODOs mark work that cannot or will not be done right now; they explain the problem and what should happen in the future, e.g. //TODO-MdM these are not needed // We expect this to go away when we do the checkout model.
  • Amplification. Amplifies the importance of something that might otherwise seem inconsequential, e.g. // the trim is real important. It removes the starting spaces that could cause the item to be recognized as another list.

Formatting

Code formatting matters. Coding style and readability have a lasting effect on the maintainability and extensibility of software.

Vertical Formatting

  1. File length. A source file should not exceed 500 lines; around 200 lines is usually best. Small files are easier to understand than large ones.
  2. Top-down reading. The functions at the top of a file give the high-level concepts and algorithms; detail should increase as you read downward.
  3. Vertical openness. Groups of lines that express different concepts should be separated by blank lines.
  4. Vertical density. Density implies close association; strongly related lines should sit close together.
  5. Vertical distance. Closely related concepts should be kept vertically near each other. 1) Declare variables as close as possible to where they are used. 2) Declare instance variables at the top of the class. 3) Dependent functions: if one function calls another, keep them vertically close, with the caller above the callee. 4) Conceptual affinity: similar code should be near similar code, e.g. overloads of the same method name with different parameters.
  6. Vertical ordering. Function call dependencies should point downward. (A sketch follows this list.)
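
A small sketch of these vertical rules (ReportPrinter and its methods are hypothetical): blank lines separate concepts, a variable is declared where it is used, and the caller sits directly above the callee.

import java.util.List;

class ReportPrinter {
    // Caller above callee: the call dependency points downward.
    void printReport(List<String> lines) {
        String header = buildHeader(lines.size()); // declared right where it is used
        System.out.println(header);

        for (String line : lines)      // the blank line above separates the
            System.out.println(line);  // header concept from the body concept
    }

    private String buildHeader(int lineCount) {
        return "Report (" + lineCount + " lines)";
    }
}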

Horizontal Formatting

  1. Line width. Keep lines to at most 120 characters.
  2. Horizontal openness and density. Surround operators with spaces to separate the left and right operands; it reads more clearly.
  3. Indentation. Use indentation to make the hierarchy of scopes visible.

Team Rules

Every programmer has favorite formatting rules, but on a team we code by the team's rules. All developers on a team should agree on a single coding style and then apply it consistently: what matters is that the style is uniform.

Error Handling

Error handling is something we must do. Input can be abnormal and devices can fail; in short, things can go wrong, and programmers are responsible for making sure the system keeps working when they do.

  1. Use exceptions rather than return codes. Exceptions are cleaner: returned error codes must be handled immediately, which produces deeply nested structures.
  2. Try writing tests that force exceptions, so the error paths are exercised.
  3. Use unchecked exceptions. Checked exceptions violate the Open/Closed Principle: if you throw a checked exception, every method between the throw and the catch must declare it.
  4. Define exception classes in terms of the caller's needs. What matters most is how exceptions will be caught; a single exception class is often enough, and an API can wrap several underlying exceptions into one.
  5. Don't use try/catch for business flow; the catch block should not contain business logic.
  6. Don't return null. Returning null burdens callers and breeds null-check code everywhere. When tempted to return null, consider throwing an exception or returning a special-case object such as Collections.emptyList() (see the sketch after this list).
  7. Don't pass null as an argument.
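
A minimal sketch of the special-case return from item 6 (EmployeeRepository and Employee are hypothetical):

import java.util.Collections;
import java.util.List;

class Employee { /* fields omitted */ }

class EmployeeRepository {
    private List<Employee> employees; // may not have been loaded yet

    // Returning an empty list instead of null spares every caller
    // a null check before iterating.
    List<Employee> getEmployees() {
        if (employees == null) {
            return Collections.emptyList();
        }
        return employees;
    }
}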

Unit Test

Unit tests verify that each piece of functionality behaves exactly as we expect.

Keeping Tests Clean

Dirty test code is worse than no tests. Test code must change as the production code evolves; dirty tests are hard to change, so the tests grow ever larger and messier until the whole suite is abandoned.

Without tests, you cannot verify that a change still works, or that changing one part of the system has not broken another, so the defect rate begins to rise. Developers then stop cleaning and reorganizing the production code because they fear their changes will break it, and the production code begins to rot.

Test code is just as important as production code. It requires thought, design, and care, and it must be kept as clean as production code.

Unit tests keep code flexible, maintainable, and reusable: when you have tests, you are not afraid to change the code, so you can keep improving it.

Clean Tests

What makes tests clean? Readability. Keep test methods as clean as ordinary methods: as small as possible, doing one thing, at one level of abstraction, with descriptive names.

Single Concept per Test

Each test method should test a single concept, as in the sketch below.
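
A minimal JUnit 5-style sketch (Calculator and the test names are hypothetical): each test method verifies exactly one concept and is named for it.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class Calculator {
    int add(int a, int b)    { return a + b; }
    int divide(int a, int b) { return a / b; }
}

class CalculatorTest {
    @Test
    void addsTwoNumbers() { // one concept: addition
        assertEquals(5, new Calculator().add(2, 3));
    }

    @Test
    void dividesTwoNumbers() { // a separate concept gets a separate test
        assertEquals(2, new Calculator().divide(6, 3));
    }
}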

F.I.R.S.T

Five rules for writing clean tests:

  • Fast. Tests should run quickly. If tests run slowly, you won't run them frequently; if you don't run them frequently, you won't find problems early or fix them easily.
  • Independent. Tests should not depend on each other; you should be able to run each test alone and run the tests in any order.
  • Repeatable. Tests should be repeatable in any environment: production, QA, or a local machine.
  • Self-validating. A test should have a boolean output; you should know whether it passed without reading a log file.
  • Timely. Write test code just before the production code that makes it pass.

Classes

Class Organization

By standard Java convention, a class begins with a list of variables: public static constants first, then private static variables, then instance variables. Functions follow the variables, ordered from higher to lower levels of abstraction, as in the skeleton below.
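
A skeletal sketch of that ordering, with hypothetical names:

public class OrderService {
    // 1. Public static constants
    public static final int MAX_ITEMS = 100;

    // 2. Private static variables
    private static int instanceCount = 0;

    // 3. Instance variables
    private final String region;

    public OrderService(String region) {
        this.region = region;
        instanceCount++;
    }

    // 4. Public methods first, at the highest level of abstraction...
    public boolean accept(int itemCount) {
        return withinLimit(itemCount);
    }

    // ...followed by the lower-level details they call.
    private boolean withinLimit(int itemCount) {
        return itemCount <= MAX_ITEMS;
    }
}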

Class Should Be Small

The first rule of classes: classes should be small. A class's name should describe the responsibilities it fulfills. We should be able to write a brief description of a class in about 25 words without using the words "if", "and", "or", or "but".

The Single Responsibility Principle states that a class or module should have one, and only one, reason to change. The principle both defines responsibility and guides class size. Trying to identify responsibilities (reasons to change) often helps us recognize and create better abstractions in the code.

Cohesion. Classes should have a small number of instance variables. The more instance variables a method manipulates, the more cohesive that method is to its class. When classes lose cohesion, split them.

Organizing for Change

Most systems change continually, and every change carries the risk of breaking some other part of the system. We can organize our classes to reduce this risk. The following principles help: the Single Responsibility Principle (SRP), the Open-Closed Principle (OCP), and the Dependency Inversion Principle (DIP).

Summary

Most Important

  • Meaningful Names
  • SRP(Single Responsibility Principle)

