When I execute a SQL script dumped from a database's structure and data, the error “The user specified as a definer ('xxx'@'%') does not exist” occurs.
Error Info
SQL Error (1449): The user specified as a definer ('xxx'@'%') does not exist
Solutions
This commonly occurs when exporting views/triggers/procedures from one database or server to another as the user that created that object no longer exists.
For example, the following is a trigger create statement:
CREATE DEFINER=`not_exist_user` TRIGGER your_trigger
BEFORE INSERT ON your_table
FOR EACH ROW SET new.create_time = NOW();
Solution 1: Change the DEFINER
This is probably easiest to do when initially importing your database objects: remove any DEFINER clauses (DEFINER=some_user) from the dump before running it.
Changing the definer later is a little more tricky. You can search for solutions to “How to change the definer for views/triggers/procedures”.
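For example (a hedged sketch; the object names and the replacement user are placeholders, and a view's original SELECT body has to be repeated):

-- Views: re-declare the definer with ALTER VIEW
ALTER DEFINER = 'existing_user'@'%' VIEW your_view AS
SELECT id, name FROM your_table;

-- Triggers: drop and recreate without the stale DEFINER clause
DROP TRIGGER your_trigger;
CREATE TRIGGER your_trigger BEFORE INSERT ON your_table
FOR EACH ROW SET new.create_time = NOW();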
Solution 2: Create the missing user
If you’ve found the following error while using a MySQL database:
The user specified as a definer ('some_user'@'%') does not exist
Then you can solve it by using the following SQL:
CREATE USER 'some_user'@'%' IDENTIFIED BY 'complex-password';
GRANT ALL ON *.* TO 'some_user'@'%' IDENTIFIED BY 'complex-password';
/* or GRANT ALL ON *.* TO 'some_user'@'%'; */
FLUSH PRIVILEGES;
Reasons
My exported trigger has a definer user that does not exist.
When you insert data into the table that the trigger is attached to, MySQL raises the error “The user specified as a definer xxx does not exist”.
The default maximum body size of a client request (which in practice also caps the maximum upload file size) that Nginx allows is 1M. So when you try to upload something larger than 1M, you get the following error: 413 Request Entity Too Large.
When over the max upload file size
When uploading a file over the max size, Nginx returns:
status code: 413 Request Entity Too Large
Content-Type: text/html
response body:
<html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.18.0</center>
</body>
</html>
Solutions
Add the following settings to your Nginx configuration file nginx.conf
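The directive that controls this limit is client_max_body_size; it can be set in the http, server, or location context. The 100m value below is only an example, adjust it to what your application needs:

http {
    # Raise the maximum accepted request body size (the default is 1m)
    client_max_body_size 100m;
}

After changing the configuration, reload Nginx (e.g. nginx -s reload) for the new limit to take effect.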
Note: max-file-size and max-request-size are Spring Boot multipart properties, not Nginx directives; they are covered in the Spring Boot solution below. max-file-size specifies the maximum size permitted for a single uploaded file (the default is 1MB); max-request-size specifies the maximum size allowed for a whole multipart/form-data request (the default is 10MB).
When over the max upload file size
The Java web project throws an IllegalStateException:
- UT005023: Exception handling request to /file/uploadFile
java.lang.IllegalStateException: io.undertow.server.handlers.form.MultiPartParserDefinition$FileTooLargeException: UT000054: The maximum size 1048576 for an individual file in a multipart request was exceeded
    at io.undertow.servlet.spec.HttpServletRequestImpl.parseFormData(HttpServletRequestImpl.java:847)
    at io.undertow.servlet.spec.HttpServletRequestImpl.getParameter(HttpServletRequestImpl.java:722)
    at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:85)
    ...
Solutions
Add the following settings to your Spring Boot configuration file application.yml:
spring:
  servlet:
    multipart:
      # max single file size
      max-file-size: 100MB
      # max request size
      max-request-size: 200MB
When calling the backend API, the status code of the response is 500, but the backend does not throw any exceptions. The HTTP response message is “Proxy error: Could not proxy request”.
Error Info
Proxy error: Could not proxy request /captchaImage from localhost:8070 to http://10.0.0.74:8090 (ECONNREFUSED).
Solutions
Make sure the devServer.proxy.target config is correct and that the target backend is actually running and reachable (ECONNREFUSED means the connection to the target was refused).
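For example, in a Vue CLI / webpack dev-server setup the proxy is typically configured along these lines (a hedged sketch; the '/api' prefix is a placeholder, and the ports are taken from the error message above):

// vue.config.js (or the devServer block of the webpack config)
module.exports = {
  devServer: {
    port: 8070,
    proxy: {
      '/api': {                          // adjust to your API path prefix
        target: 'http://10.0.0.74:8090', // must point at a running, reachable backend
        changeOrigin: true
      }
    }
  }
};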
JUnit 5 leverages features from Java 8 or later, such as lambda functions, making tests more powerful and easier to maintain.
JUnit 5 has added some very useful new features for describing, organizing, and executing tests. For instance, tests get better display names and can be organized hierarchically.
JUnit 5 is organized into multiple libraries, so only the features you need are imported into your project. With build systems such as Maven and Gradle, including the right libraries is easy.
JUnit 5 can use more than one extension at a time, which JUnit 4 could not (only one runner could be used at a time). This means you can easily combine the Spring extension with other extensions (such as your own custom extension).
JUnit 5 assertions are now in org.junit.jupiter.api.Assertions. Most of the common assertions, such as assertEquals() and assertNotNull(), look the same as before, but there are a few differences:
The error message is now the last argument, for example: assertEquals("my message", 1, 2) is now assertEquals(1, 2, "my message").
Most assertions now accept a lambda that constructs the error message, which is called only when the assertion fails.
assertTimeout() and assertTimeoutPreemptively() have replaced JUnit 4's @Test timeout attribute (JUnit 5 also has an @Timeout annotation, but it works differently from the JUnit 4 timeout); see the short sketch after this list.
There are several new assertions, described below.
Note that you can continue to use assertions from JUnit 4 in a JUnit 5 test if you prefer.
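For example, a minimal sketch of the two timeout assertions (inside a @Test method, with static imports from org.junit.jupiter.api.Assertions and java.time.Duration imported; the sleep stands in for real work):

// fails if the lambda takes longer than 500 ms; the lambda still runs to completion
assertTimeout(Duration.ofMillis(500), () -> Thread.sleep(100));

// runs the lambda in a separate thread and aborts it as soon as the timeout is exceeded
assertTimeoutPreemptively(Duration.ofMillis(500), () -> Thread.sleep(100));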
Assumptions
Executes the supplied Executable, but only if the supplied assumption is valid.
JUnit 4
assumeThat("alwaysPasses", 1, is(1)); // passes foo(); // will execute assumeThat("alwaysFails", 0, is(1)); // assumption failure! test halts intx=1 / 0; // will never execute
To convert an existing JUnit 4 test to JUnit 5, use the following steps, which should work for most tests:
Update imports to remove JUnit 4 and add JUnit 5. For instance, update the package name for the @Test annotation, and update both the package and class name for assertions (from Assert to Assertions). Don’t worry yet if there are compilation errors, because completing the following steps should resolve them.
Globally replace old annotations and class names with new ones. For example, replace all @Before with @BeforeEach and all Assert with Assertions.
Update assertions; any assertions that provide a message need to have the message argument moved to the end (pay special attention when all three arguments are strings!). Also, update timeouts and expected exceptions (see above for examples).
Update assumptions if you are using them.
Replace any instances of @RunWith, @Rule, or @ClassRule with the appropriate @ExtendWith annotations. You may need to find updated documentation online for the extensions you’re using for examples.
New Features
Display Names
You can add the @DisplayName annotation to classes and methods. The name is used when generating reports, which makes it easier to describe the purpose of tests and track down failures, for example (the names below are illustrative):
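import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

@DisplayName("Shopping cart")
class ShoppingCartTest {

    @Test
    @DisplayName("adding an item increases the total price")
    void addItemIncreasesTotal() {
        // ...
    }
}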
assertAll() groups multiple assertions together: it asserts that all supplied executables do not throw exceptions. The added benefit is that all assertions are performed, even if individual assertions fail.
void assertAll(Executable... executables)
assertThrows() and assertDoesNotThrow() have replaced the expected property in the @Test annotation.
<T extends Throwable> T assertThrows(Class<T> expectedType, Executable executable)
void assertDoesNotThrow(Executable executable)
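For example, a minimal sketch (inside a @Test method, with static imports from org.junit.jupiter.api.Assertions):

assertAll("arithmetic",
        () -> assertEquals(4, 2 + 2),
        () -> assertTrue(2 + 2 > 3)
);

// assertThrows returns the thrown exception so it can be inspected further
NumberFormatException e = assertThrows(NumberFormatException.class,
        () -> Integer.parseInt("not a number"));
assertTrue(e.getMessage().contains("not a number"));

assertDoesNotThrow(() -> Integer.parseInt("42"));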
Nested tests
Test suites in JUnit 4 were useful, but nested tests in JUnit 5 are easier to set up and maintain, and they better describe the relationships between test groups.
Parameterized tests
Test parameterization existed in JUnit 4, with built-in libraries such as JUnit4Parameterized or third-party libraries such as JUnitParams. In JUnit 5, parameterized tests are completely built in and adopt some of the best features from JUnit4Parameterized and JUnitParams, for example:
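(A minimal sketch; the test class and the sample values are made up.)

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class PalindromeTest {

    @ParameterizedTest
    @ValueSource(strings = {"racecar", "level", "noon"})
    void isPalindrome(String candidate) {
        // the test runs once per value in the @ValueSource
        assertTrue(candidate.contentEquals(new StringBuilder(candidate).reverse()));
    }
}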
JUnit 5 provides the ExecutionCondition extension API to enable or disable a test or container (test class) conditionally. This is like using @Disabled on a test but it can define custom conditions. There are multiple built-in conditions, such as these:
@EnabledOnOs and @DisabledOnOs: Enables or disables a test only on specified operating systems
@EnabledOnJre and @DisabledOnJre: Specifies the test should be enabled or disabled for particular versions of Java
@EnabledIfSystemProperty: Enables a test based on the value of a JVM system property
@EnabledIf: Uses scripted logic to enable a test if scripted conditions are met
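For example, a minimal sketch of the built-in conditions (class, method, and property names are illustrative):

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.EnabledIfSystemProperty;
import org.junit.jupiter.api.condition.EnabledOnOs;
import org.junit.jupiter.api.condition.OS;

class ConditionalTests {

    @Test
    @EnabledOnOs({OS.LINUX, OS.MAC})
    void onlyOnUnixLike() {
        // runs only on Linux or macOS
    }

    @Test
    @EnabledIfSystemProperty(named = "env", matches = "ci")
    void onlyOnCi() {
        // runs only when the JVM is started with -Denv=ci
    }
}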
Test templates
Test templates are not regular tests; they define a set of steps to perform, which can then be executed elsewhere using a specific invocation context. This means that you can define a test template once, and then build a list of invocation contexts at runtime to run that test with. For details and examples, see the documentation.
Dynamic tests
Dynamic tests are like test templates; the tests to run are generated at runtime. However, while test templates are defined with a specific set of steps and run multiple times, dynamic tests use the same invocation context but can execute different logic. One use for dynamic tests would be to stream a list of abstract objects and perform a separate set of assertions for each based on their concrete types. There are good examples in the documentation.
Spring Boot Test With JUnit
Spring Boot Test With JUnit 4
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<!-- Starting with Spring Boot 2.4, JUnit 5's vintage engine has been removed from
     spring-boot-starter-test. If we still want to write tests using JUnit 4, we need
     to add the following Maven dependency. -->
<dependency>
    <groupId>org.junit.vintage</groupId>
    <artifactId>junit-vintage-engine</artifactId>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>org.hamcrest</groupId>
            <artifactId>hamcrest-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Although you probably won’t need to convert your old JUnit 4 tests to JUnit 5 unless you want to use new JUnit 5 features, there are compelling reasons to switch to JUnit 5.
Given two integers dividend and divisor, divide two integers without using multiplication, division, and mod operator.
The integer division should truncate toward zero, which means losing its fractional part. For example, 8.345 would be truncated to 8, and -2.7335 would be truncated to -2.
Return the quotient after dividing dividend by divisor.
Note: Assume we are dealing with an environment that could only store integers within the 32-bit signed integer range: [−2^31, 2^31 − 1]. For this problem, if the quotient is strictly greater than 2^31 - 1, then return 2^31 - 1, and if the quotient is strictly less than -2^31, then return -2^31.
Example 1:
Input: dividend = 10, divisor = 3
Output: 3
Explanation: 10/3 = 3.33333.. which is truncated to 3.
Example 2:
Input: dividend = 7, divisor = -3
Output: -2
Explanation: 7/-3 = -2.33333.. which is truncated to -2.
Constraints:
-2^31 <= dividend, divisor <= 2^31 - 1
divisor != 0
Related Topics
Math
Bit Manipulation
Analysis
set quotient = 0
while dividend >= divisor:
    find the largest n ∈ N such that divisor * 2^n <= dividend < divisor * 2^(n+1)
    quotient = quotient + 2^n
    dividend = dividend - divisor * 2^n
    (when divisor <= dividend < divisor * 2, n = 0: quotient increases by 1 and dividend decreases by divisor)
when dividend < divisor, return quotient
Solution
public int divide(int dividend, int divisor) {
    // Corner case: when -2^31 is divided by -1, the result 2^31 doesn't fit in an int, so it would overflow
    if (dividend == Integer.MIN_VALUE && divisor == -1) return Integer.MAX_VALUE;

    // Logical XOR: the result is negative only if exactly one of the operands is negative
    boolean negative = dividend < 0 ^ divisor < 0;
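The snippet above is truncated. Below is a complete sketch of the doubling approach from the analysis; everything after the sign computation is my own filling-in (it uses long internally to sidestep intermediate overflow), so treat it as an illustration rather than the original solution.

public int divide(int dividend, int divisor) {
    // Corner case: -2^31 / -1 would be 2^31, which does not fit in an int
    if (dividend == Integer.MIN_VALUE && divisor == -1) return Integer.MAX_VALUE;
    // The result is negative only if exactly one operand is negative
    boolean negative = (dividend < 0) ^ (divisor < 0);
    long a = Math.abs((long) dividend);
    long b = Math.abs((long) divisor);
    long quotient = 0;
    while (a >= b) {
        // Double the divisor (and its multiple) as long as it still fits into the remainder
        long value = b;
        long multiple = 1;
        while (a >= (value << 1)) {
            value <<= 1;
            multiple <<= 1;
        }
        a -= value;
        quotient += multiple;
    }
    return (int) (negative ? -quotient : quotient);
}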
Apache POI is a Java API for Microsoft Documents processing. It provides pure Java libraries for reading and writing files in Microsoft Office formats, such as Word, PowerPoint and Excel. The code examples in this post are based on Apache POI v5.0.0.
Integer spacingBefore; // 1/20th point
paragraph.setSpacingBefore(spacingBefore);
Integer spacingBeforeLines; // 1/100th line
paragraph.setSpacingBeforeLines(spacingBeforeLines);
Integer spacingAfter; // 1/20th point
paragraph.setSpacingAfter(spacingAfter);
Integer spacingAfterLines; // 1/100th line
paragraph.setSpacingAfterLines(spacingAfterLines);
Integer spacingBetween; // 1 line or 1 point, depending on which LineSpacingRule is used
paragraph.setSpacingBetween(spacingBetween, LineSpacingRule.AUTO);
paragraph.setSpacingBetween(spacingBetween, LineSpacingRule.EXACT);
// align
CellUtil.setAlignment(cell, HorizontalAlignment.LEFT);
CellUtil.setVerticalAlignment(cell, VerticalAlignment.TOP);
// wrap text
CellUtil.setCellStyleProperty(cell, CellUtil.WRAP_TEXT, true);
// font
CellUtil.setFont(cell, font);
Insert Images
InputStream inputStream = new FileInputStream("C:\\Users\\Taogen\\Desktop\\demo.png");
byte[] bytes = IOUtils.toByteArray(inputStream);
int pictureIdx = workbook.addPicture(bytes, Workbook.PICTURE_TYPE_PNG);
inputStream.close();
// Returns an object that handles instantiating concrete classes
CreationHelper helper = workbook.getCreationHelper();
// Creates the top-level drawing patriarch.
Drawing drawing = sheet.createDrawingPatriarch();
// Create an anchor that is attached to the worksheet
ClientAnchor anchor = helper.createClientAnchor();
// set top-left corner for the image
anchor.setCol1(1);
anchor.setRow1(2);
// Creates a picture
Picture picture = drawing.createPicture(anchor, pictureIdx);
// Reset the image size
double scale = 0.2;
picture.resize(scale);
// or picture.resize() for the original size
In the SQL world, order is not an inherent property of a set of data. Thus, you get no guarantees from your RDBMS that your data will come back in a certain order – or even in a consistent order – unless you query your data with an ORDER BY clause.
If an <order by clause> is not specified, then the ordering of the rows is implementation-dependent.
Default Orders
MySQL Server v5.6, InnoDB
If the selected fields are all covered by a unique key/index, the default order is by that unique key/index.
If the selected fields are all in the primary key, the default order is by the unique key/index or the primary key.
If the selected fields contain a field that is not in the primary key, a unique key, or an index, the default order is by the primary key.
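For example (a hedged illustration, assuming a table t with primary key id, a secondary index on col_a, and an unindexed column col_b):

SELECT col_a FROM t;                  -- may come back in the order of the col_a index
SELECT id, col_b FROM t;              -- may come back in primary-key order
SELECT id, col_b FROM t ORDER BY id;  -- the only way to guarantee the order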
Suggestions
Do not depend on order when ORDER BY is missing.
Always specify ORDER BY if you want a particular order.
If the exported data is not large and the export can be finished in a few seconds, we can just use synchronous export: send an export request, then download the generated data file.
Asynchronous Export
If the exported data is very large, exporting will take a lot of time, so we need to use asynchronous export: send an export request, check the progress of the export, wait for the export file to be generated, and then download it.
Ways of Implementing Export
Write data into Java servlet response output stream.
Write data into a file stored in a temporary directory (e.g. the directory given by the java.io.tmpdir system property), and return the download file URI. Delete the file when the download is finished.
Upload data to OSS, and return download file URL.
The Limit of Max Size of Exported Data
Sometimes we need to consider setting a maximum size for the data to be exported.
Fetch External Data
Fetch data from relational databases
Fetch static files with HTTP URLs
Build Exported Files
Exported File Types
Office Excel
Office Doc
Zip
Optimization
Database
SQL optimization: 1) only query the required columns; 2) add indexes for the query.
Cache
Cache rarely modified database data to Redis or memory.
Network IO
Fetch Database data
Fetch rows with multiple threads.
Fetch a limited number of rows at a time; the batch size depends on the data size of a row, e.g. for small rows you can fetch 500 rows at a time (see the sketch below).
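A hedged sketch of the two points above (not the author's code): fetchPage stands in for whatever DAO/JDBC call accepts an offset and a limit, and the pool size and page size are arbitrary.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.BiFunction;

public class PagedFetcher {

    // fetchPage.apply(offset, limit) is assumed to run something like
    // "SELECT ... LIMIT ?, ?" and return the rows of that page
    public static List<Map<String, Object>> fetchAll(
            BiFunction<Integer, Integer, List<Map<String, Object>>> fetchPage,
            int totalRows, int pageSize, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<List<Map<String, Object>>>> futures = new ArrayList<>();
            for (int offset = 0; offset < totalRows; offset += pageSize) {
                final int from = offset;
                futures.add(pool.submit(() -> fetchPage.apply(from, pageSize)));
            }
            List<Map<String, Object>> rows = new ArrayList<>();
            for (Future<List<Map<String, Object>>> future : futures) {
                rows.addAll(future.get()); // futures are read in submit order, so row order is preserved
            }
            return rows;
        } finally {
            pool.shutdown();
        }
    }
}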
Fetch Static Files by URL
Fetch files with multiple threads, e.g. 20 threads (see the sketch after this list).
Cache files in a temporary directory.
Use blocking NIO, non-blocking NIO, or NIO2.
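A hedged sketch of multi-threaded fetching with temporary-directory caching (not the author's code; it assumes Java 11+ for java.net.http.HttpClient, and the file-naming scheme is deliberately simplistic):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class UrlFetcher {

    public static List<Path> downloadAll(List<String> urls, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads); // e.g. 20 threads
        HttpClient client = HttpClient.newHttpClient();
        Path tempDir = Files.createTempDirectory("export-files"); // cache directory
        try {
            List<Future<Path>> futures = new ArrayList<>();
            for (String url : urls) {
                futures.add(pool.submit(() -> {
                    // naive file naming, good enough for an illustration
                    Path target = tempDir.resolve(Integer.toHexString(url.hashCode()));
                    HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
                    client.send(request, HttpResponse.BodyHandlers.ofFile(target));
                    return target;
                }));
            }
            List<Path> files = new ArrayList<>();
            for (Future<Path> future : futures) {
                files.add(future.get());
            }
            return files;
        } finally {
            pool.shutdown();
        }
    }
}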
Compression
Compress images.
Compress text files.
Compress binary files.
Disk IO
When using traditional blocking IO, reading and writing files should go through a buffered input/output wrapper class (e.g. BufferedInputStream) or read from/write to a buffer array directly.
// buffered input/output wrapper class
FileInputStream fis = new FileInputStream(filepath);
BufferedInputStream bis = new BufferedInputStream(fis);
// direct buffer array
FileInputStream fis = new FileInputStream(filepath);
byte buf[] = new byte[2048];
int len;
while ((len = fis.read(buf)) != -1) {}
(Optional) Use blocking NIO, non-blocking NIO, or NIO2. The Java NIO package offers the possibility to transfer bytes between two Channels without buffering them in application memory. In a single-threaded environment, traditional IO is often simpler; NIO is used not because it is faster but because it scales better, especially when there are large numbers of clients.
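For example, a minimal sketch of a channel-to-channel copy with FileChannel.transferTo (illustrative only):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChannelCopy {

    // Copies source to target channel-to-channel; transferTo can use the OS's
    // zero-copy facilities instead of pulling the bytes into the JVM heap.
    public static void copy(Path source, Path target) throws IOException {
        try (FileChannel in = FileChannel.open(source, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(target,
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            long position = 0;
            long size = in.size();
            while (position < size) {
                // transferTo may transfer fewer bytes than requested, so loop until done
                position += in.transferTo(position, size - position, out);
            }
        }
    }
}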