Adding XML auto-completion for log4j2.xml in Eclipse

Reference: https://issues.apache.org/jira/browse/LOG4J2-411

Support of XSD/DTD linked to the configuration file

It would be very nice if the XML configuration used a dedicated namespace, e.g. http://logging.apache.org/log4j/2.0/config.
This feature allows using XML catalogs to locate the schema locally, e.g. with Eclipse IDE.
The Log4j-events.xsd already contains such a declaration:

targetNamespace="http://logging.apache.org/log4j/2.0/events"

Then the configuration XML file needs only a small extension:

log4j2.xml
<?xml version="1.0" encoding="utf-8"?>
<Configuration status="WARN" xmlns="http://logging.apache.org/log4j/2.0/config">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
    </Console>
  </Appenders>
  <Loggers>
    <Root level="error">
      <AppenderRef ref="Console" />
    </Root>
  </Loggers>
</Configuration>

A productivity booster: the Git cherry-pick command

In real-world projects under Git version control, it is almost inevitable that you sometimes commit work without having switched branches first, or need to fix a bug on another branch and then bring that fix into the current branch. The first instinct used to be to merge the branch over to solve the problem, but if other people's commits are interleaved with yours and must not be merged into the other branch, a plain merge quickly becomes painful. Here is a very handy Git command, cherry-pick, for handling these situations and improving development efficiency.

The git cherry-pick command lets you select one or several commits from another branch and apply them. You can think of it as a pick-and-choose, customized version of merge.

When cherry-picking several commits, apply them in the order they were originally committed, otherwise some files may end up in conflict; this is an easy trap to fall into.
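A minimal usage sketch (the branch name and commit SHAs are hypothetical):

$ git checkout release                # switch to the branch that should receive the commits
$ git cherry-pick 0a1b2c3             # apply a single commit from another branch
$ git cherry-pick 4d5e6f7 8a9b0c1     # apply several commits, oldest first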

  1. When your feature is not finished yet but development should have moved to another branch, you can finish it on the current branch first and then cherry-pick the commits over.
  2. When you need to bring someone else's commits onto another branch, cherry-pick is the tool to reach for (in real projects, someone inevitably commits to the wrong branch).
  3. When you switch to a branch to fix a bug and then need to bring that fix onto another branch, cherry-pick works well.

Firebug announces it is no longer maintained; it's hard to say goodbye!

Firebug announced on its official website: "The Firebug extension is no longer being developed or maintained; we invite you to use Firefox's built-in developer tools instead."

Firebug is a development extension for Firefox and has been one of Firefox's five-star, strongly recommended add-ons. It combines HTML viewing and editing, a JavaScript console and a network monitor in a single tool, making it a capable assistant for developing JavaScript, CSS, HTML and Ajax. Like a finely crafted Swiss Army knife, Firebug dissects the inner details of a web page from many different angles, bringing great convenience to web developers.

Source: http://getfirebug.com/

Adding SQLCipher to Xcode Projects (iOS / OS X)


SQLite is already a popular API for persistent data storage in iOS apps so the upside for development is obvious. As a programmer you work with a stable, well-documented API that happens to have many good wrappers available in Objective-C, such as FMDB and Encrypted Core Data. All security concerns are cleanly decoupled from application code and managed by the underlying framework.

The framework code of the SQLCipher project is open source, so users can be confident that an application isn’t using insecure or proprietary security code. In addition, SQLCipher can also be compiled on Android, Linux, OS X and Windows for those developing cross-platform applications.

Using SQLCipher in an iOS app is fairly straightforward. This document describes integrating SQLCipher into an existing iOS project using the Community Edition source code build process. This tutorial assumes some familiarity with basic iOS app development and a working install of Xcode (6.1.1). The same basic steps can be applied to OS X projects as well.

🔥 Hot Tip: Commercial Edition static libraries are available for you to drop right into your project if you’d like to skip all this and receive personalized support from our crack development team! Binaries and helpful project integrations are available for all supported platforms. Learn more »

Prerequisites

Xcode with an iOS or OS X SDK installed. Visit the Apple Developer site for more information on downloading the latest Xcode and iOS and OS X SDKs.

OpenSSL

OpenSSL is no longer required for building SQLCipher on iOS and OS X, as the project by default uses Apple’s CommonCrypto framework for hardware-accelerated encryption. You can still build SQLCipher with other crypto providers like OpenSSL if you’d prefer, or you can write your own.

SQLCipher

Fire up the Terminal app, switch into your project’s root directory and checkout the SQLCipher project code using Git:

$ cd ~/Documents/code/SQLCipherApp
$ git clone https://github.com/sqlcipher/sqlcipher.git

Xcode Project Configuration

The SQLCipher source provides a sqlcipher.xcodeproj project file that we’ll add to your project to build a static library that you’ll link from your main application target.

Add Project Reference

Open your iOS app’s project or workspace in Xcode, open the Project Navigator (command+1), and click on the top-level Project icon for your iOS app. Right click on the project and choose “Add Files to “My App”” (the label will vary depending on your app’s name). Since we cloned SQLCipher directly into the same folder as your iOS app you should see a sqlcipher folder in your root project folder. Open this folder and select sqlcipher.xcodeproj:

Add Files to 'My App'

Project References

Project Settings

Navigate to your Project settings (make sure you don’t select the application target level). Select the Build Settings pane. In the search field, type in “Header Search Paths”. Double-click on the field under the target column and add the following path: $(PROJECT_DIR)/sqlcipher/src:

Next, add a setting to ensure that SQLCipher is the first library linked with your application in the “Other Linker Flags” setting. Start typing “Other Linker Flags” into the search field until the setting appears, double click to edit it, and add the following value: $(BUILT_PRODUCTS_DIR)/libsqlcipher.a

You will next edit one other setting on your Project to ensure the SQLCipher builds correctly—”Other C Flags.” Start typing “Other C Flags” into the search field until the setting appears, double click to edit it, and in the pop-up add the following value: -DSQLITE_HAS_CODEC

Target Settings

Next, navigate to the Target Level settings. Add a Target dependency to each of your application targets to ensure that SQLCipher is compiled before the application code. In Xcode’s Project Navigator (command+1), select your app’s Project file and in the Editor pane select Build Phases and your app’s main target (not the project file).

Expand Target Dependencies and click on the + button at the end of the list. In the browser that opens, select the sqlcipher static library target:

Add Target Dependency

Expand Link Binary With Libraries, click on the + button at the end of the list, and select the libsqlcipher.a library.

Link Binary With Libraries

Finally, also under Link Binary With Libraries, add Security.framework.

🔥 Hot Tip: If libsqlite3.dylib or another SQLite framework is listed in your Link Binary With Libraries list be sure to remove it!

Repeat these steps for any other targets in your project that will depend on SQLCipher, i.e. unit tests.

Integration Code

Now that the SQLCipher library is incorporated into the project you can start using the library immediately. Telling SQLCipher to encrypt a database is easy:

  • Open the database
  • Use the sqlite3_key function to provide key material. In most cases this should occur as the first operation after opening the database.
  • Run a query to verify the database can be opened (i.e. by querying the schema)
  • As a precautionary measure, run a query to ensure that the application is using SQLCipher on the active connection
#import <Foundation/Foundation.h>
#import <sqlite3.h>

...
NSString *databasePath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0]
                          stringByAppendingPathComponent: @"sqlcipher.db"];
sqlite3 *db;
sqlite3_stmt *stmt;
bool sqlcipher_valid = NO;

if (sqlite3_open([databasePath UTF8String], &db) == SQLITE_OK) {
    const char* key = [@"BIGSecret" UTF8String];
    sqlite3_key(db, key, strlen(key));
    if (sqlite3_exec(db, (const char*) "SELECT count(*) FROM sqlite_master;", NULL, NULL, NULL) == SQLITE_OK) {
      if(sqlite3_prepare_v2(db, "PRAGMA cipher_version;", -1, &stmt, NULL) == SQLITE_OK) {
        if(sqlite3_step(stmt)== SQLITE_ROW) {
          const unsigned char *ver = sqlite3_column_text(stmt, 0);
          if(ver != NULL) {
            sqlcipher_valid = YES;

            // password is correct (or the database has just been initialized), and verified to be using sqlcipher

          }
        }
        sqlite3_finalize(stmt);
      }
    }
    sqlite3_close(db);
}

In most cases SQLCipher uses PBKDF2, a salted and iterated key derivation function, to obtain the encryption key. Alternatively, an application can tell SQLCipher to use a specific binary key in blob notation (note that SQLCipher requires exactly 256 bits of key material), e.g.

PRAGMA key = "x'2DD29CA851E7B56E4697B0E1F08507293D761A05CE4D1B628663F411A8086D99'";

Once the key is set SQLCipher will automatically encrypt all data in the database! Note that if you don’t set a key then SQLCipher will operate identically to a standard SQLite database.

Testing and Verification

There are a number of ways that you can verify SQLCipher is working as expected in your applications before its release to users.

After the application is wired up to use SQLCipher, take a peek at the resulting data files to make sure everything is in order. An ordinary SQLite database will look something like the following under hexdump. Note that the file type, schema, and data are clearly readable.

% hexdump -C plaintext.db
00000000  53 51 4c 69 74 65 20 66  6f 72 6d 61 74 20 33 00  |SQLite format 3.|
00000010  04 00 01 01 00 40 20 20  00 00 00 04 00 00 00 00  |.....@  ........|
...
000003b0  00 00 00 00 24 02 06 17  11 11 01 35 74 61 62 6c  |....$......5tabl|
000003c0  65 74 32 74 32 03 43 52  45 41 54 45 20 54 41 42  |et2t2.CREATE TAB|
000003d0  4c 45 20 74 32 28 61 2c  62 29 24 01 06 17 11 11  |LE t2(a,b)$.....|
000003e0  01 35 74 61 62 6c 65 74  31 74 31 02 43 52 45 41  |.5tablet1t1.CREA|
000003f0  54 45 20 54 41 42 4c 45  20 74 31 28 61 2c 62 29  |TE TABLE t1(a,b)|
...
000007d0  00 00 00 14 02 03 01 2d  02 74 77 6f 20 66 6f 72  |.......-.two for|
000007e0  20 74 68 65 20 73 68 6f  77 15 01 03 01 2f 01 6f  | the show..../.o|
000007f0  6e 65 20 66 6f 72 20 74  68 65 20 6d 6f 6e 65 79  |ne for the money|

Fire up the SQLCipher application in the simulator and look for the application database files under /Users/sjlombardo/Library/Application Support/iPhone Simulator/5.0/Applications/<Instance ID>/Documents. Try running hexdump on the application database. With SQLCipher the output should look completely random, with no discernible characteristics at all.

% hexdump -C sqlcipher.db
00000000  1b 31 3c e3 aa 71 ae 39  6d 06 f6 21 63 85 a6 ae  |.1<..q.9m..!c...|
00000010  ca 70 91 3e f5 a5 03 e5  b3 32 67 2e 82 18 97 5a  |.p.>.....2g....Z|
00000020  34 d8 65 95 eb 17 10 47  a7 5e 23 20 21 21 d4 d1  |4.e....G.^# !!..|
...
000007d0  af e8 21 ea 0d 4f 44 fe  15 b7 c2 94 7b ee ca 0b  |..!..OD.....{...|
000007e0  29 8b 72 93 1d 21 e9 91  d4 3c 99 fc aa 64 d2 55  |).r..!...<...d.U|
000007f0  d5 e9 3f 91 18 a9 c5 4b  25 cb 84 86 82 0a 08 7f  |..?....K%.......|
00000800

Other sensible testing steps include:

  • Attempt to open a database with a correct key and verify that the operation succeeds
  • Attempt to open a database with an incorrect key and verify that the operation fails
  • Attempt to open a database without any key, and verify the operation fails
  • Programmatically inspect the first 16 bytes of the database file and ensure that they contain random data (i.e. not the string "SQLite format 3\0")

来源:https://www.zetetic.net/sqlcipher/ios-tutorial/

An algorithm complexity cheat sheet every programmer should bookmark

English original: http://bigocheatsheet.com/
Translated by: Linux中国
Link: https://linux.cn/article-7480-1.html

This article covers the Big-O time and space complexities of algorithms commonly used in computer science. Before going to interviews, I used to spend a lot of time trawling the Internet for the pros and cons of various search and sorting algorithms so that I wouldn't get stumped. Over the last few years I have interviewed at several Silicon Valley startups and at some larger companies such as Yahoo, eBay, LinkedIn and Google, and every time I had to prepare the same material, I kept asking myself, "Why hasn't anyone created a nice Big-O cheat sheet?" So, to save everyone's time, I created one. I hope you like it!

— Eric[1]

Legend

Data structure operations

Array sorting algorithms

Graph operations

Heap operations

Big-O complexity chart

(The original charts and tables are not reproduced here.)

Recommended reading

  • Cracking the Coding Interview: 150 Programming Questions and Solutions[33]
  • Introduction to Algorithms, 3rd Edition[34]
  • Data Structures and Algorithms in Java (2nd Edition)[35]
  • High Performance Java (Build Faster Web Application Interfaces)[36]


JSON Web Token explained in detail [repost]

Understanding JWT

JSON Web Tokens (JWT) are a standard way of representing security claims between the add-on and the Atlassian host product. A JWT token is simply a signed JSON object which contains information which enables the receiver to authenticate the sender of the request.


Structure of a JWT token

A JWT token looks like this:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjEzODY4OTkxMzEsImlzcyI6ImppcmE6MTU0ODk1OTUiLCJxc2giOiI4MDYzZmY0Y2ExZTQxZGY3YmM5MGM4YWI2ZDBmNjIwN2Q0OTFjZjZkYWQ3YzY2ZWE3OTdiNDYxNGI3MTkyMmU5IiwiaWF0IjoxMzg2ODk4OTUxfQ.uKqU9dTB6gKwG6jQCuXYAiMNdfNRw98Hw_IWuA5MaMo

Once you understand the format, it’s actually pretty simple:

<base64url-encoded header>.<base64url-encoded claims>.<base64url-encoded signature>

In other words:

  • You create a header object, with the JSON format. Then you encode it in base64url
  • You create a claims object, with the JSON format. Then you encode it in base64url
  • You create a signature for the URI (we’ll get into that later). Then you encode it in base64url
  • You concatenate the three items, with the “.” separator

You shouldn’t actually have to do this manually, as there are libraries available in most languages, as we describe in the JWT libraries section. However it is important you understand the fields in the JSON header and claims objects described in the next sections:

Header

The header object declares the type of the encoded object and the algorithm used for the cryptographic signature. Atlassian Connect always uses the same values for these. The typ property will be “JWT” and the alg property will be “HS256”.

{
  "typ": "JWT",
  "alg": "HS256"
}

Attribute        Type    Description
typ              String  Type for the token, defaulted to "JWT". Specifies that this is a JWT token.
alg (mandatory)  String  Algorithm. Specifies the algorithm used to sign the token. In atlassian-connect version 1.0 we support the HMAC SHA-256 algorithm, which the JWT specification identifies using the string "HS256".

Important

Your JWT library or implementation should discard any tokens which specify alg: none as this can provide a bypass of the token verification.
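As a rough illustration of that check, here is a minimal sketch using the nimbus-jose-jwt library that appears later in this article (the class and method names are hypothetical):

import java.text.ParseException;
import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.JWSObject;

public class AlgorithmCheck {
    // Only accept tokens whose header declares the expected HS256 algorithm.
    public static JWSObject parseHs256Only(String jwt) throws ParseException {
        JWSObject jwsObject = JWSObject.parse(jwt);
        if (!JWSAlgorithm.HS256.equals(jwsObject.getHeader().getAlgorithm())) {
            throw new IllegalArgumentException("Unexpected JWT signing algorithm");
        }
        return jwsObject;
    }
}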

Claims

The claims object contains security information about the message you’re transmitting. The attributes of this object provide information to ensure the authenticity of the claim. The information includes the issuer, when the token was issued, when the token will expire, and other contextual information, described below.

{
  "iss": "jira:1314039",
  "iat": 1300819370,
  "exp": 1300819380,
  "qsh": "8063ff4ca1e41df7bc90c8ab6d0f6207d491cf6dad7c66ea797b4614b71922e9",
  "sub": "batman",
  "context": {
    "user": {
      "userKey": "batman",
      "username": "bwayne",
      "displayName": "Bruce Wayne"
    }
  }
}

Attribute          Type                Description

iss (mandatory)    String              The issuer of the claim. Connect uses it to identify the application making the call. For example:

  • If the Atlassian product is the calling application: contains the unique identifier of the tenant. This is the clientKey that you receive in the installed callback. You should reject unrecognised issuers.
  • If the add-on is the calling application: the add-on key specified in the add-on descriptor.

iat (mandatory)    Long                Issued-at time. Contains the UTC Unix time at which this token was issued. There are no hard requirements around this claim, but it does not make sense for it to be significantly in the future. Also, significantly old issued-at times may indicate the replay of suspiciously old tokens.
exp (mandatory)    Long                Expiration time. Contains the UTC Unix time after which you should no longer accept this token. It should be after the issued-at time.
qsh (mandatory)    String              Query string hash. A custom Atlassian claim that prevents URL tampering.
sub (optional)     String              The subject of this token. This is the user associated with the relevant action, and may not be present if there is no logged-in user.
aud (optional)     String or String[]  The audience(s) of this token. For REST API calls from an add-on to a product, the audience claim can be used to disambiguate the intended recipients. This attribute is not used for JIRA and Confluence at the moment, but will become mandatory when making REST calls from an add-on to e.g. the bitbucket.org domain.
context (optional) Object              The context claim is an extension added by Atlassian Connect which may contain useful context for outbound requests (from the product to your add-on). The current user (the same user as in the sub claim) is added to the context. This contains the userKey, username and display name for the subject.

"context": {
    "user": {
        "userKey": "batman",
        "username": "bwayne",
        "displayName": "Bruce Wayne"
    }
}
  • userKey — the primary key of the user. Any time you want to store a reference to a user in long-term storage (e.g. a database or index) you should use the key, because it can never change. The user key should never be displayed to the user, as it may be a non-human-readable value.
  • username — a unique secondary key, but should not be stored in long-term storage because it can change over time. This is the value that the user logs into the application with, and may be displayed to the user.
  • displayName — the user’s name.

You should use a little leeway when processing time-based claims, as clocks may drift apart. The JWT specification suggests no more than a few minutes. Judicious use of the time-based claims allows for replays within a limited window. This can be useful when all or part of a page is refreshed or when it is valid for a user to repeatedly perform identical actions (e.g. clicking the same button).

Signature

The signature of the token is produced by applying a hashing algorithm to the header and claims sections of the token. This provides a way to verify that the claims and headers haven't been compromised during transmission. The signature will also detect whether a different secret was used for signing. The JWT spec allows several algorithms for creating the signature, but Atlassian Connect uses the HMAC SHA-256 algorithm. If the JWT token specifies no algorithm, you should discard it, as it cannot be signature-verified.

JWT libraries

Most modern languages have JWT libraries available. We recommend you use one of these libraries (or other JWT-compatible libraries) before trying to hand-craft the JWT token.

Language Library
Java atlassian-jwt and jsontoken
Python pyjwt
Node.js node-jwt-simple
Ruby ruby-jwt
PHP firebase php-jwt and luciferous jwt
.NET jwt
Haskell haskell-jwt

The JWT decoder is a handy web based decoder for Atlassian Connect JWT tokens.

Creating a JWT token

Here is an example of creating a JWT token, in Java, using atlassian-jwt and nimbus-jwt (last tested with atlassian-jwt version 1.5.3 and nimbus-jwt version 2.16):

import java.io.UnsupportedEncodingException;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;

import com.atlassian.jwt.*;
import com.atlassian.jwt.core.writer.*;
import com.atlassian.jwt.httpclient.CanonicalHttpUriRequest;
import com.atlassian.jwt.writer.JwtJsonBuilder;
import com.atlassian.jwt.writer.JwtWriterFactory;

public class JWTSample {

    public String createUriWithJwt()
            throws UnsupportedEncodingException, NoSuchAlgorithmException {
        long issuedAt = System.currentTimeMillis() / 1000L;
        long expiresAt = issuedAt + 180L;
        String key = "atlassian-connect-addon"; // the key from the add-on descriptor
        String sharedSecret = "...";            // the shared secret received
                                                // during the add-on installation handshake
        String method = "GET";
        String baseUrl = "https://<my-dev-environment>.atlassian.net/";
        String contextPath = "/";
        String apiPath = "/rest/api/latest/serverInfo";

        JwtJsonBuilder jwtBuilder = new JsonSmartJwtJsonBuilder()
                .issuedAt(issuedAt)
                .expirationTime(expiresAt)
                .issuer(key);

        CanonicalHttpUriRequest canonical = new CanonicalHttpUriRequest(method,
                apiPath, contextPath, new HashMap());
        JwtClaimsBuilder.appendHttpRequestClaims(jwtBuilder, canonical);

        JwtWriterFactory jwtWriterFactory = new NimbusJwtWriterFactory();
        String jwtbuilt = jwtBuilder.build();
        String jwtToken = jwtWriterFactory.macSigningWriter(SigningAlgorithm.HS256,
                sharedSecret).jsonToJwt(jwtbuilt);

        String apiUrl = baseUrl + apiPath + "?jwt=" + jwtToken;
        return apiUrl;
    }
}

Decoding and verifying a JWT token

Here is a minimal example of decoding and verifying a JWT token, in Java, using atlassian-jwt and nimbus-jwt (last tested with atlassian-jwt version 1.5.3 and nimbus-jwt version 2.16).

NOTE: This example does not include any error handling. See AbstractJwtAuthenticator from atlassian-jwt for recommendations of how to handle the different error cases.

import com.atlassian.jwt.*;
import com.atlassian.jwt.core.http.JavaxJwtRequestExtractor;
import com.atlassian.jwt.core.reader.*;
import com.atlassian.jwt.exception.*;
import com.atlassian.jwt.reader.*;
import javax.servlet.http.HttpServletRequest;
import java.io.UnsupportedEncodingException;
import java.security.NoSuchAlgorithmException;
import java.util.Map;

public class JWTVerificationSample {

    public Jwt verifyRequest(HttpServletRequest request,
                             JwtIssuerValidator issuerValidator,
                             JwtIssuerSharedSecretService issuerSharedSecretService)
            throws UnsupportedEncodingException, NoSuchAlgorithmException,
            JwtVerificationException, JwtIssuerLacksSharedSecretException,
            JwtUnknownIssuerException, JwtParseException {
        JwtReaderFactory jwtReaderFactory = new NimbusJwtReaderFactory(
                issuerValidator, issuerSharedSecretService);
        JavaxJwtRequestExtractor jwtRequestExtractor = new JavaxJwtRequestExtractor();
        CanonicalHttpRequest canonicalHttpRequest
                = jwtRequestExtractor.getCanonicalHttpRequest(request);
        Map requiredClaims = JwtClaimVerifiersBuilder.build(canonicalHttpRequest);
        String jwt = jwtRequestExtractor.extractJwt(request);
        return jwtReaderFactory.getReader(jwt).readAndVerify(jwt, requiredClaims);
    }
}

Decoding a JWT token

Decoding the JWT token reverses the steps followed during the creation of the token, to extract the header, claims and signature. Here is an example in Java:

String jwtToken = ...; // e.g. extracted from the request
String[] base64UrlEncodedSegments = jwtToken.split("\\.");
String base64UrlEncodedHeader = base64UrlEncodedSegments[0];
String base64UrlEncodedClaims = base64UrlEncodedSegments[1];
String signature = base64UrlEncodedSegments[2];
String header = base64Urldecode(base64UrlEncodedHeader);   // base64Urldecode stands in for your base64url decoder
String claims = base64Urldecode(base64UrlEncodedClaims);

This gives us the following:

Header:

{
  "alg": "HS256",
  "typ": "JWT"
}

Claims:

{
  "iss": "jira:15489595",
  "iat": 1386898951,
  "qsh": "8063ff4ca1e41df7bc90c8ab6d0f6207d491cf6dad7c66ea797b4614b71922e9",
  "exp":
}

Signature:

uKqU9dTB6gKwG6jQCuXYAiMNdfNRw98Hw_IWuA5MaMo

Verifying a JWT token

JWT libraries typically provide methods to be able to verify a received JWT token. Here is an example using nimbus-jose-jwt and json-smart:

import java.text.ParseException;

import com.nimbusds.jose.JOSEException;
import com.nimbusds.jose.JWSObject;
import com.nimbusds.jose.JWSVerifier;
import com.nimbusds.jwt.JWTClaimsSet;
import net.minidev.json.JSONObject;

public JWTClaimsSet read(String jwt, JWSVerifier verifier) throws ParseException, JOSEException {
    JWSObject jwsObject = JWSObject.parse(jwt);

    if (!jwsObject.verify(verifier)) {
        throw new IllegalArgumentException("Fraudulent JWT token: " + jwt);
    }

    JSONObject jsonPayload = jwsObject.getPayload().toJSONObject();
    return JWTClaimsSet.parse(jsonPayload);
}

Creating a query string hash

A query string hash is a signed canonical request for the URI of the API you want to call.

qsh = `sign(canonical-request)`
canonical-request = `canonical-method + '&' + canonical-URI + '&' + canonical-query-string`

A canonical request is a normalised representation of the URI. Here is an example. For the following URL, assuming you want to do a “GET” operation:

"https://<my-dev-environment>.atlassian.net/path/to/service?zee_last=param&repeated=parameter 1&first=param&repeated=parameter 2"

The canonical request is

"GET&/path/to/service&first=param&repeated=parameter%201,parameter%202&zee_last=param"

To create a query string hash, follow the detailed instructions below:

  1. Compute canonical method
    • Simply the upper-case of the method name (e.g. "GET" or "PUT")
  2. Append the character '&'
  3. Compute canonical URI
    • Discard the protocol, server, port, context path and query parameters from the full URL.
      • For requests targeting add-ons discard the baseUrl in the add-on descriptor.
    • Removing the context path allows a reverse proxy to redirect incoming requests for "jira.example.com/getsomething" to "example.com/jira/getsomething" without breaking authentication. The requester cannot know that the reverse proxy will prepend the context path "/jira" to the originally requested path "/getsomething".
    • Empty-string is not permitted; use "/" instead.
    • Url-encode any '&' characters in the path.
    • Do not suffix with a '/' character unless it is the only character. e.g.
      • Canonical URI of "https://example.atlassian.net/wiki/some/path/?param=value" is "/some/path"
      • Canonical URI of "https://example.atlassian.net" is "/"
  4. Append the character '&'
  5. Compute canonical query string
    • The query string will use percent-encoding.
    • Sort the query parameters primarily by their percent-encoded names and secondarily by their percent-encoded values.
    • Sorting is by codepoint: sort(["a", "A", "b", "B"]) => ["A", "B", "a", "b"]
    • For each parameter append its percent-encoded name, the '=' character and then its percent-encoded value.
    • In the case of repeated parameters append the ',' character and subsequent percent-encoded values.
    • Ignore the jwt parameter, if present.
    • Some particular values to be aware of:
      • A whitespace character is encoded as "%20",
      • "+" as "%2B",
      • "*" as "%2A" and
      • "~" as "~".
        (These values used for consistency with OAuth1.)
  6. Convert the canonical request string to bytes
    • The encoding used to represent characters as bytes is UTF-8
  7. Hash the canonical request bytes using the SHA-256 algorithm
    • e.g. the SHA-256 hash of "foo" is "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae" (see the sketch below)
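To tie the steps together, here is a minimal Java sketch that hashes the canonical request shown earlier (the class name is hypothetical; it mirrors the getQueryStringHash helper in the manual example further below):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class QshSketch {
    public static void main(String[] args) throws Exception {
        // canonical request for the example URL above
        String canonicalRequest =
            "GET&/path/to/service&first=param&repeated=parameter%201,parameter%202&zee_last=param";
        MessageDigest md = MessageDigest.getInstance("SHA-256");                       // step 7: SHA-256
        byte[] digest = md.digest(canonicalRequest.getBytes(StandardCharsets.UTF_8));  // step 6: UTF-8 bytes
        StringBuilder qsh = new StringBuilder();
        for (byte b : digest) {
            qsh.append(String.format("%02x", b));   // lower-case hex string used as the qsh claim value
        }
        System.out.println(qsh);
    }
}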

Advanced: Creating a JWT token manually

Disclaimer

You should only need to read this section if you are planning to create JWT tokens manually, i.e. if you are not using one of the libraries listed in the previous section

More details on JWT tokens

The format of a JWT token is simple: <base64url-encoded header>.<base64url-encoded claims>.<signature>.

  • Each section is separated from the others by a period character (.).
  • Each section is base64url encoded, so you will need to decode each one to make them human-readable. Note that encoding with base64 and not base64url will result in an incorrect JWT token for payloads with non UTF-8 characters.
  • The header specifies a very small amount of information that the receiver needs in order to parse and verify the JWT token.
    • All JWT token headers state that the type is “JWT”.
    • The algorithm used to sign the JWT token is needed so that the receiver can verify the signature.
  • The claims are a list of assertions that the issuer is making: each says that “this named field” has “this value”.
    • Some, like the “iss” claim, which identifies the issuer of this JWT token, have standard names and uses.
    • Others are custom claims. We limit our use of custom claims as much as possible, for ease of implementation.
  • The signature is computed by using an algorithm such as HMAC SHA-256 plus the header and claims sections.
    • The receiver verifies that the signature must have been computed using the genuine JWT header and claims sections, the indicated algorithm and a previously established secret.
    • An attacker tampering with the header or claims will cause signature verification to fail.
    • An attacker signing with a different secret will cause signature verification to fail.
    • There are various algorithm choices legal in the JWT spec. In atlassian-connect version 1.0 we support HMAC SHA-256. Important: your implementation should discard any JWT tokens which specify alg: none as these are not subject to signature verification.

Steps to Follow

  1. Create a header JSON object
  2. Convert the header JSON object to a UTF-8 encoded string and base64url encode it. That gives you encodedHeader.
  3. Create a claims JSON object, including a query string hash
  4. Convert the claims JSON object to a UTF-8 encoded string and base64url encode it. That gives you encodedClaims.
  5. Concatenate the encoded header, a period character (.) and the encoded claims set. That gives you signingInput = encodedHeader + "." + encodedClaims.
  6. Compute the signature of signingInput using the JWT or cryptographic library of your choice. Then base64url encode it. That gives you encodedSignature.
  7. Concatenate the signing input, another period character and the signature, which gives you the JWT token: jwtToken = signingInput + "." + encodedSignature

Example

Here is an example in Java using gson, commons-codec, and the Java security and crypto libraries:

public class JwtClaims {
    protected String iss;
    protected long iat;
    protected long exp;
    protected String qsh;
    protected String sub;
    // + getters/setters/constructors
}
[…]
public class JwtHeader {
    protected String alg;
    protected String typ;
    // + getters/setters/constructors
}
[…]
import static org.apache.commons.codec.binary.Base64.encodeBase64URLSafeString;
import static org.apache.commons.codec.binary.Hex.encodeHexString;
import java.io.UnsupportedEncodingException;
import java.security.*;
import javax.crypto.*;
import javax.crypto.spec.SecretKeySpec;
import com.google.gson.Gson;

public class JwtBuilder {

    public static String generateJWTToken(String requestUrl, String canonicalUrl,
            String key, String sharedSecret)
            throws NoSuchAlgorithmException, UnsupportedEncodingException,
            InvalidKeyException {
        JwtClaims claims = new JwtClaims();
        claims.setIss(key);
        claims.setIat(System.currentTimeMillis() / 1000L);
        claims.setExp(claims.getIat() + 180L);
        claims.setQsh(getQueryStringHash(canonicalUrl));
        String jwtToken = sign(claims, sharedSecret);
        return jwtToken;
    }

    private static String sign(JwtClaims claims, String sharedSecret)
            throws InvalidKeyException, NoSuchAlgorithmException {
        String signingInput = getSigningInput(claims, sharedSecret);
        String signed256 = signHmac256(signingInput, sharedSecret);
        return signingInput + "." + signed256;
    }

    private static String getSigningInput(JwtClaims claims, String sharedSecret)
            throws InvalidKeyException, NoSuchAlgorithmException {
        JwtHeader header = new JwtHeader();
        header.alg = "HS256";
        header.typ = "JWT";
        Gson gson = new Gson();
        String headerJsonString = gson.toJson(header);
        String claimsJsonString = gson.toJson(claims);
        String signingInput = encodeBase64URLSafeString(headerJsonString.getBytes())
                + "."
                + encodeBase64URLSafeString(claimsJsonString.getBytes());
        return signingInput;
    }

    private static String signHmac256(String signingInput, String sharedSecret)
            throws NoSuchAlgorithmException, InvalidKeyException {
        SecretKey key = new SecretKeySpec(sharedSecret.getBytes(), "HmacSHA256");
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        return encodeBase64URLSafeString(mac.doFinal(signingInput.getBytes()));
    }

    private static String getQueryStringHash(String canonicalUrl)
            throws NoSuchAlgorithmException, UnsupportedEncodingException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(canonicalUrl.getBytes("UTF-8"));
        byte[] digest = md.digest();
        return encodeHexString(digest);
    }
}
[…]
public class Sample {
    public String getUrlSample() throws Exception {
        String requestUrl =
            "https://<my-dev-environment>.atlassian.net/rest/atlassian-connect/latest/license";
        String canonicalUrl = "GET&/rest/atlassian-connect/latest/license&";
        String key = "...";          // from the add-on descriptor, received during installation handshake
        String sharedSecret = "..."; // received during installation handshake
        String jwtToken = JwtBuilder.generateJWTToken(
            requestUrl, canonicalUrl, key, sharedSecret);
        String restAPIUrl = requestUrl + "?jwt=" + jwtToken;
        return restAPIUrl;
    }
}

Stateless Authentication implementation using JWT, Nginx+Lua and Memcached

If you already have an idea of stateless authentication and JWT, then carry on with this implementation blog; otherwise, go through the previous blog, Stateless Authentication, to get an idea first.

As I mentioned in my previous blog, JWTs can be signed using a secret (with the HMAC algorithm) or with a public/private key pair using RSA.

Clients can access resources from different applications, so to validate the token at each application we need the secret or the public/private key.

Problems of validating the token in every application

  1. We have to maintain the secret key in all the applications and have to write or inject the token validation logic into every application. The validation logic may include more than token validation, such as fingerprint mismatch checks, session idle timeout and more, depending on the requirements.
  2. If the applications are developed in different languages, then we have to implement the token validation logic for each application's technology stack, and maintenance becomes very difficult.

Solution

Instead of maintaining the validation logic in every application, we can put it in one common place so that every request can make use of it, regardless of the application (note: the applications could be developed in any language). I have chosen the reverse proxy server (Nginx) to hold the validation logic, with the help of Lua.

Advantages

  1. We don't need to maintain the secret or the private/public key in every application. It only lives at the authentication server, to generate tokens, and at the proxy server (Nginx), to validate them.
  2. Maintenance of the validation logic is easy.

Before jumping into the flow and implementation, let's see why we have chosen this technology stack.

Why JWT ? 

To achieve stateless authentication we have chosen JWT (JSON Web Token). It lets us easily and securely transmit information between parties as a JSON object. If we want to put sensitive information in the JWT token, we can encrypt the JWT payload itself using the JSON Web Encryption (JWE) specification.

Why Nginx + Lua ?

Nginx+Lua is a self-contained web server embedding the scripting language Lua. Powerful applications can be written directly inside Nginx without using cgi, fastcgi, or uwsgi. By adding a little Lua code to an existing Nginx configuration file, it is easy to add small features.

One of the core benefits of Nginx+Lua is that it is fully asynchronous. Nginx+Lua inherits the same event loop model that has made Nginx a popular choice of webserver. “Asynchronous” simply means that Nginx can interrupt your code when it is waiting on a blocking operation, such as an outgoing connection or reading a file, and run the code of another incoming HTTP Request.

Why Memcached ?

To keep the application more secure, along with the token validation we also check the fingerprint and handle an idle timeout. That is, if the user is idle for some time and performs no action, the user has to be logged out of the application. To perform the fingerprint check and the idle timeout check, some information needs to be shared across the applications, and to share it we have chosen Memcached (a distributed cache).

Note: If you don't need the fingerprint mismatch check and the idle timeout check, you can simply drop the Memcached component from the flow.

Flow

 

(flow diagram)

 

Step 1

The client tries to access a resource from the application without a JWT token, or with an invalid one. As shown in the flow, the request goes to the proxy server (Nginx).

Step 2

Nginx looks for the auth header (X-AUTH-TOKEN) and validates the token with the help of Lua.

 

Step 3

Since the token is missing or invalid, Nginx sends an error response back to the client.

 

Step 4

Now the user has to log in to the system, so the client loads the login page.

Step 5

The client sends a request to the authentication server to authenticate the user. Along with the username and password, the client also sends a fingerprint. We use the fingerprint to make sure that all subsequent requests originate from the same device on which the user logged in to the system.

Sample authenticate request body

 

Step 6

The authentication server validates the credentials and creates a JWT token with a TokenId (a randomly generated UUID) as a claim; this tokenId is used to uniquely identify the user. It then sets the JWT token in the response header (X-AUTH-TOKEN).

Create JWT Token

Add this dependency to your pom.xml to work on JWT

While creating the token you can set any number of claims.

CustomClaim.java

The generated JWT token looks like the one below.

And the JWT token payload looks like the one below. You can put in whatever data you want, such as the roles and permissions associated with the user, and so on.
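The post's own dependency and token-creation snippets are shown as images and are not reproduced here; as a rough sketch of what Step 6 describes, assuming the io.jsonwebtoken (jjwt) library rather than the author's exact code:

import java.util.Date;
import java.util.UUID;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

public class TokenIssuer {
    public static String issueToken(String username, String sharedSecret) {
        String tokenId = UUID.randomUUID().toString();   // randomly generated UUID used as the tokenId claim
        return Jwts.builder()
                .claim("tokenId", tokenId)               // uniquely identifies the user's session entry
                .setSubject(username)
                .setIssuedAt(new Date())
                .setExpiration(new Date(System.currentTimeMillis() + 30L * 60 * 1000))
                .signWith(SignatureAlgorithm.HS256, sharedSecret.getBytes())
                .compact();                              // the value set in the X-AUTH-TOKEN response header
    }
}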

 

Step 7

Put the TokenId as the key and the user meta information (fingerprint, last access time, etc.) as the value in Memcached; this is what lets Nginx verify the fingerprint and the session idle timeout on its side using Lua.

Sample Memcached content

 

Put Content in Memcached

Add this dependency to your pom.xml to work on Memcached
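Again, the original dependency and snippet are shown as images; here is a minimal sketch of Step 7, assuming the spymemcached client and hypothetical key/value names:

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class UserMetaStore {
    public static void storeUserMeta(String tokenId, String fingerprint, long lastAccessTime)
            throws Exception {
        // host, port and the JSON layout of the value are assumptions
        MemcachedClient memcached = new MemcachedClient(new InetSocketAddress("localhost", 11211));
        String userMeta = "{\"fingerprint\":\"" + fingerprint + "\","
                + "\"last_access_time\":" + lastAccessTime + ","
                + "\"session_idle_timeout\":1800}";
        memcached.set(tokenId, 24 * 60 * 60, userMeta);  // keyed by tokenId so Nginx/Lua can look it up
        memcached.shutdown();
    }
}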

 

 

Step 8

Send the response back to the client from the authentication server, with the X-AUTH-TOKEN response header set.

 

Step 9

Fetch the token from the response header and store it in local storage on the client side, so that it can be sent in the request header with every request from now on.

Step 10

Now the client accesses a resource from the application with a valid JWT token. As shown in the flow, the request goes to the proxy server (Nginx). With every request the client also sends the fingerprint in a header; let's call that header "FINGER-PRINT".

Step 11

Nginx validates the token. As the token is valid, it extracts the TokenId from the JWT token in order to fetch the user meta information from Memcached.

If there is no entry in Memcached for that TokenId, Nginx simply sends a "LOGGED_OUT" response to the client.

But in our case the user is logged in to the system, so there is an entry in Memcached for the TokenId. We fetch that user meta information and perform the following checks.

Fingerprint mismatch: When sending the authentication request, the client sends the fingerprint along with the username and password. We store that fingerprint value in Memcached and compare it with the fingerprint that arrives with every request. If the fingerprints match, processing continues; otherwise Nginx sends a response to the client saying the fingerprint is mismatched.

Session idle timeout: On successful authentication, the authentication server puts the user's configured session_idle_timeout into Memcached. If it is configured as "-1", we simply skip the idle timeout check. Otherwise, for every request we check whether the session has gone idle. If it has not, we update the last_access_time value in Memcached to the current system time; if the session is idle, Nginx sends an error response to the client.

Complete Validation Logic at Nginx using Lua

base-validation.lua

Step 12

Once the request has passed the validation logic described above, Nginx proxy_passes the request to the application.

sample-nginx.conf

Step 13

The application sends the requested resource back to the client.

How to achieve logout ?

There is an open (unanswered) question about how to achieve logout on the server side if we go with stateless authentication using JWT.

Mostly, people discuss handling logout on the client side:

  • When the user clicks logout, the client can simply remove the token from local storage.

But I came up with a solution to achieve logout on the server side by making use of Memcached.

  • When the user clicks logout, remove the entry from Memcached that we stored in Step 7 (the client can also delete the token from local storage). In the validation logic covered in Step 11, I check for the entry in Memcached; if there is no entry, it means the user has logged out of the application. A sketch of this follows below.
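A minimal sketch of that server-side logout, again assuming the spymemcached client (names are hypothetical):

import net.spy.memcached.MemcachedClient;

public class LogoutHandler {
    private final MemcachedClient memcached;

    public LogoutHandler(MemcachedClient memcached) {
        this.memcached = memcached;
    }

    public void logout(String tokenId) {
        // Once the entry is gone, the Step 11 check at Nginx treats the token as LOGGED_OUT.
        memcached.delete(tokenId);
    }
}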

How to undo (almost) anything with Git

Translated by: Li Wei
Reviewed by: Zhang Fan
Translated from: GitHub


One of the most useful features of any version control system is the ability to "undo" your mistakes. In Git, "undo" can mean many slightly different things.

When you make a new commit, Git stores a snapshot of your repository at that specific moment in time; later, you can use Git to go back to an earlier version of your project.

Below, I'll cover some of the common scenarios where you might want to "undo" a change and show how to do it with Git.

1. Undo a "public" change

Scenario: You have just pushed your local changes to GitHub with git push, and now you realize there is a mistake in one of those commits. You'd like to undo that commit.

Undo with: git revert <SHA>

What's happening: git revert creates a new commit that is the opposite of the commit with the given SHA. If the old commit is "matter", the new commit is "anti-matter": anything removed in the old commit will be added in the new commit, and anything added in the old commit will be removed in the new commit.

This is Git's safest, most basic "undo" scenario, because it doesn't alter history: you can now git push the new "inverse" commit to undo your mistaken commit.

2. Fix the last commit message

Scenario: You just made a typo in your last commit message; say you typed git commit -m "Fxies bug #42", and before running git push you realize the message should have read "Fixes bug #42".

Undo with: git commit --amend or git commit --amend -m "Fixes bug #42"

What's happening: git commit --amend updates and replaces the most recent commit with a new commit that combines any staged changes with the contents of the previous commit. With nothing currently staged, it effectively just rewrites the previous commit message.

3. Undo "local" changes

Scenario: The cat walked across the keyboard and somehow saved the file you were editing, then crashed your editor. You haven't committed anything, and you want to undo everything in that file and roll it back to the way it looked at the last commit.

Undo with: git checkout -- <file>

What's happening: git checkout alters files in the working directory to a state previously known to Git. You can provide a branch name or a specific SHA you want to go back to; by default, Git checks out HEAD, the last commit on the currently checked-out branch.

Keep in mind: any changes you "undo" this way are really gone. They were never committed, so Git cannot recover them. Be sure you know what you're throwing away here! (Maybe use git diff to confirm.)

4. Reset "local" changes

Scenario: You have made some commits locally (not yet pushed), but everything is terrible; you want to undo the last three commits as if they had never happened.

Undo with: git reset or git reset --hard

What's happening: git reset rewinds your repository's history back to the commit at the specified SHA, as if those later commits never happened. By default, git reset preserves the working directory: the commits are gone, but the contents are still on disk. This is the safest option; but often you'll want to "undo" the commits and the local changes in one move, and that's what the --hard flag does.

5. Redo after undo "local"

Scenario: You made some commits, then ran git reset --hard to undo those changes (see above), and then realized you want those changes back!

Undo with: git reflog and git reset, or git checkout

What's happening: git reflog is a fantastic resource for recovering project history. You can recover almost anything that was ever committed via the reflog.

You are probably familiar with the git log command, which shows a list of commits. git reflog is similar, but instead shows a list of the times that HEAD changed.

Some caveats:

1. Only HEAD changes are recorded. HEAD changes when you switch branches, commit with git commit, or un-commit with git reset, but it does not change when you run git checkout -- <file> (as in the earlier scenario; those changes were never committed, so the reflog cannot help us recover them).

2. The reflog doesn't last forever. Git periodically cleans up objects that are "unreachable". Don't expect to find months-old commits in the reflog.

3. Your reflog is yours alone. You can't use your reflog to restore another developer's un-pushed commits.


So, how do you use the reflog to get back to your previously "undone" commits? It depends on exactly what you want to accomplish (see the example after this list):

1. If you want to restore the project's history as it was at a certain commit, use git reset --hard <SHA>.

2. If you want to recreate one or more files in your working directory as they were at a certain commit, without changing history, use git checkout <SHA> -- <file>.

3. If you want to replay exactly one of those commits into your repository, use git cherry-pick <SHA>.
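A minimal sketch of option 1, recovering through the reflog (the SHAs and messages are hypothetical):

$ git reflog
2f3a8bb HEAD@{0}: reset: moving to HEAD~3
9b1d4c7 HEAD@{1}: commit: add feature X
...
$ git reset --hard 9b1d4c7            # restore history up to the "lost" commit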

6. Once more, with branching

Scenario: You made some commits, then realized you were checked out on master, but you had meant to make those commits on a feature branch instead.

Undo with: git branch feature, git reset --hard origin/master, and git checkout feature

What's happening: You may be used to creating new branches with git checkout -b <name>, a popular shortcut for creating a branch and checking it out right away, but here you don't want to switch branches just yet. git branch feature creates a new branch called feature, pointing at your most recent commit, while leaving you checked out on master.

Next, git reset --hard rewinds master back to origin/master, before any of your new commits. Don't worry, though: they are still available on feature.

Finally, git checkout switches to the new feature branch, which still has all of your recent work intact.

7. A branch in time saves nine

Scenario: You started a feature branch off master, but master was quite a ways behind origin/master. Now that master has been brought back in sync with origin/master, you wish your commits on feature were starting from there rather than from far behind.

Undo with: git checkout feature and git rebase master

What's happening: You could have done this with git reset (without --hard, so the changes stay on disk) followed by git checkout -b <new branch name> and then re-committing the changes, but that way you would lose the local commit history. There is a better option:

git rebase master does a couple of things:

1. First, it locates the common ancestor between your currently checked-out branch and master.

2. Then it resets the checked-out branch to that ancestor, holding all later commits in a temporary holding area.

3. Finally, it advances the checked-out branch to the end of master and replays the commits from the holding area on top of master's most recent commit.

8. Mass undo/redo

Scenario: You started implementing a feature with one approach, but part-way through you realized another approach would work better. You have a dozen or so commits, but you only want some of them; you'd like the rest to simply disappear.

Undo with: git rebase -i

What's happening: -i puts rebase into "interactive mode". It starts out like the rebase discussed above, but before replaying any commits, it pauses and lets you modify each commit as it is replayed.

rebase -i opens your default text editor with a list of the commits being applied, like this:

(screenshot: the interactive rebase todo list, one pick line per commit)

The first two columns are key: the first is the selected command, which acts on the commit identified by the SHA in the second column. By default, rebase -i assumes that every commit is being applied via the pick command.

To drop a commit, just delete that line in your editor. If you no longer want the bad commits in your project, you can simply delete lines 1 and 3-4 in the screenshot above.

If you want to keep a commit's contents but edit its message, use the reword command: replace the keyword pick with reword (or r). It might be tempting to rewrite the commit message right now, but that won't work; rebase -i ignores everything after the SHA column. The text after the SHA just helps us remember what 0835fe2 stands for. Once you finish with rebase -i, Git will prompt you for any new commit messages that need to be written.

If you need to combine two commits, you can use the squash or fixup command, as shown below:

(screenshot: a todo list using the squash and fixup commands)

squash and fixup combine "upward": a commit marked with one of these commands is combined with the commit immediately above it. In the screenshot, 0835fe2 and 6943e85 will be combined into one commit, and 38f5e4e and af67f82 will be combined into another.

When you select squash, Git prompts you to write a new commit message; fixup reuses the message from the first commit in the list. Here, af67f82 is an "Ooops" commit, so its message simply becomes the same as 38f5e4e's, but you get to write a new message for the commit that combines 0835fe2 and 6943e85.

When you save and exit the editor, Git applies your commits in order, from top to bottom. You can change the order in which commits are applied by reordering the lines before saving. If you had wanted to, you could have combined af67f82 with 0835fe2 by ordering them like this:

(screenshot: the reordered todo list)

9. Fix an earlier commit

Scenario: You failed to include a file in an earlier commit, and it would be great if that earlier commit could somehow include the stuff you left out. You haven't pushed yet, but it wasn't the most recent commit, so you can't use commit --amend.

Undo with: git commit --squash and git rebase --autosquash -i

What's happening: git commit --squash creates a new commit with a message like "squash! Earlier commit". (You could type that message by hand; commit --squash just saves you the typing.)

You can also use git commit --fixup if you don't want to write a message for the combined commit; in this case you would probably use commit --fixup, since you just want to keep the earlier commit's message during the rebase.

git rebase --autosquash -i launches an interactive rebase editor with any squash! and fixup! commits already lined up in the todo list, as shown below:

(screenshot: an autosquash interactive rebase todo list)

When using --squash and --fixup, you might not remember the SHA of the commit you want to fix, only that it was one or five commits ago. Git's ^ and ~ operators come in handy here: HEAD^ is the commit before HEAD, and HEAD~4 is four commits before HEAD, i.e. five commits back in total.

10. Stop tracking a tracked file

Scenario: You accidentally added application.log to the repository, and now every time you run the application, Git reports unstaged changes in application.log. You put "*.log" in .gitignore, but it's still there; how do you tell Git to "undo" tracking changes in this file?

Undo with: git rm --cached application.log

What's happening: While .gitignore prevents Git from tracking changes to files, or even noticing the existence of files it has never tracked before, once a file has been added or committed, Git keeps watching it for changes. Similarly, if you use git add -f to "force" an add, overriding .gitignore, Git keeps tracking the file; so it's best not to use -f later on to add files that .gitignore is meant to cover.

If you want to remove a file that should have been ignored, git rm --cached removes it from tracking while keeping the file safely on disk. Because the file is now ignored, you will no longer see it in git status, and you won't accidentally commit changes to it again.

That's how to undo anything with Git. If you'd like to learn more about Git commands, check out the related documentation below:

Original article: GitHub

Translation: http://www.jointforce.com/jfperiodical/article/show/796?m=d03

A roundup of commonly used Web Services (weather forecasts, timetables, and more)

There are many very handy ready-made Web Services out there, such as weather forecasts, IP address lookup, train timetables and so on. This post rounds up some commonly used Web Services in the hope that they are helpful.

Below is a summary of some commonly used Web Services, collected while browsing around; I hope they are useful to you. Each of the services listed exposes an Endpoint together with Disco and WSDL documents (the original links are not reproduced here):

  • Weather forecast Web Service, with data from the China Meteorological Administration
  • IP address origin lookup Web Service (currently the most complete IP address database)
  • Random English, digit and Simplified Chinese character Web Service
  • Chinese postal code <-> address two-way lookup/search Web Service
  • CAPTCHA image Web Service supporting Chinese characters, letters and digits (images and multimedia)
  • Email address validation Web Service
  • Simplified <-> Traditional Chinese conversion Web Service
  • Chinese <-> English two-way translation Web Service
  • Train timetable Web Service (the latest timetable after the sixth national speed-up)
  • Chinese stock quote Web Service (supports funds, bonds and stocks on the Shenzhen and Shanghai exchanges)
  • Real-time foreign exchange rate Web Service
  • Tencent QQ online status Web Service
  • Chinese TV programme guide Web Service
  • Foreign exchange - RMB real-time quote Web Service
  • Chinese stock intraday chart thumbnail Web Service
  • Domestic flight timetable Web Service
  • Chinese open-end fund data Web Service
  • Stock quote Web Service (supports funds, bonds and stocks in Hong Kong, Shenzhen and Shanghai; multiple symbols can be queried at once)

来源:http://developer.51cto.com/art/200908/147125.htm

The WordPress subdirectory rewrite 404 problem

A problem has been nagging me for days. My WordPress install lives in the site root, and I created some subdirectories to run other applications. I found that those applications were being interfered with by the RewriteRule entries in WordPress's .htaccess file. After a lot of effort tweaking .htaccess, the rewrite rules for the files in the subdirectories finally worked, yet requests to the subdirectory applications kept returning a 404 status even though the content itself came back fine.

In fact, if you put a Discuz forum in a subdirectory, its RewriteRule gets interfered with as well. The problem is genuinely puzzling, and worse, once a 404 status is returned, none of those files get indexed by search engines.

After some debugging and tinkering, I found a rather odd workaround: add the following line of code to the PHP files in the subdirectory:

header("Status: 200 OK");

I then tested with a few HTTP status checking tools: the directory and its files no longer return 404 but 200 instead. I'll keep an eye on how the subdirectory gets indexed by the search engines; presumably that will recover as well. Some of WordPress's behavior is truly baffling, and I can't figure out why it works this way.

WordPress really can't fly

Source: Willin Kan's blog


Someone may have told you that WordPress is the best blogging software. Yes, but that was a while ago.

Do you feel that WordPress is getting more and more bloated? Add a few plugins and it gets clumsier still. It is like a big coach: it burns a lot of fuel getting started, shifts gears and accelerates slowly, and once it is full of passengers, the time spent boarding and alighting drags out the whole journey. The one great advantage is that you can take a comfortable nap on board and wake up just as you arrive.

What gives hosting providers the biggest headache is still WordPress, which consumes a great deal of memory and CPU. To start, here is a quote from one hosting provider:

WordPress is often using too much CPU Usage, which causes the server to slow down.

You'll need to get rid of your WordPress installation or look for an alternative, we want all users to note this issue.

The gist of it: WordPress eats too much CPU, and you should switch to another program. Yes, everyone should switch; don't let WordPress drag your hosting plan down. Lately the opposition has grown louder and louder, and the WordPress team really should take a hard look at itself.

I once said that WordPress would be the next IE; I just didn't expect it to happen this fast. For nearly four years I have been digging into WordPress and have written quite a bit of code to streamline it. The deeper I dug, the duller it became, because it performs so many unnecessary steps, taking a huge detour just to get into town. In particular, the admin backend makes many external calls to the official site, some for RSS feeds and some for update checks; none of them are really necessary.

I also once found that the RSS for a few news items in the backend weighed in at as much as 2 MB. Ordinary users never even look at that material, so what is the RSS for? Looking deeper into WordPress's mindset, it is a way to harvest PageRank: the more user links, the higher the PR. In the end it is a commercial tactic. For the money?

There is a saying: if it can be solved with code, never use a plugin, because plugins are too resource-hungry. WordPress ships with a rich function library, so the bar for writing a plugin is very low; even a novice can easily knock one out. Yet nobody ever asks how many resources a given plugin consumes. Remember the All in One plugin? It was so resource-hungry that everyone who used it came to regret it. Google it today and people are still recommending All in One. Why?

Information on the web should be kept up to date; that old material should be deleted, so it stops misleading people into the same traps.

While I'm at it, a word about SBO (if you can't parse that, change the B to an E): the search results are all material from a decade or more ago. The whole town talks about SBO, not realizing it is all nonsense. Nowadays, as soon as an article mentions SBO, or links to an SBO site, your site is guaranteed to be down-ranked, because it is treated as cheating; no wonder I use SBO as a stand-in word here.

Times keep moving forward, so stop dragging out the old material to harm people. I no longer provide my old code here, for the sake of your servers; please understand. It's not that the code was bad, but that the WordPress core has grown so large that it struggles even without any plugins, never mind with them added on top.

Take my advice and switch to another blogging platform. As for which one, you'll have to choose for yourself; I dare not recommend any.

Source: Willin Kan's blog

How to fix non-consecutive WordPress post IDs

2012-03-28 16:47  Source: congblog.cn

Recently many people have been asking what to do about "non-consecutive WordPress post IDs". I actually noticed this the moment I started using WordPress, so from the very beginning I disabled the autosave and post revision features. If your permalinks don't use the post ID you may never notice, but this blog has always used that permalink structure. When your connection is slow, autosave also slows down the loading of the editing and publishing pages. Moreover, every autosaved draft is written into the database, which silently bloats database storage; too much redundant data also hurts the database's efficiency, and, as mentioned, it is what makes post IDs non-consecutive.

The WordPress dashboard offers no option to switch this feature off directly, so today let's go through how to disable it completely.

Method 1:

By default WordPress autosaves a post every 60 seconds, which I personally find far too frequent. Open the wp-config.php file in the blog root, search for "require_once(ABSPATH . 'wp-settings.php');", and add the following code above that line:

// autosave once every 10 hours
define('AUTOSAVE_INTERVAL', 36000);

// disable post revisions
define('WP_POST_REVISIONS', false);

Method 2:

This code comes from a foreign site; it was used with WordPress 3.3.1 and in principle works on anything above 3.0, though I have not tested WP 3.0.x. Just add the following code to the functions.php file of the theme you are currently using:

/* disable autosave and post revisions */
remove_action('pre_post_update', 'wp_save_post_revision');
add_action('wp_print_scripts', 'disable_autosave');

function disable_autosave() {
    wp_deregister_script('autosave');
}

Cleaning old post revisions out of the database

With autosave and revisions taken care of, the next step is to delete the redundant posts and revisions already sitting in the database. Back up the database before touching it. Log in to phpMyAdmin, open the SQL command window, and run the following statement (if you changed the database table prefix, replace the wp in the table name with your own prefix):

delete from wp_posts where post_type='revision';

Permalink: http://www.congblog.cn/878.html | 大葱博客

WordPress lets users comment by signing in with a Twitter account

According to foreign media reports, the blogging site WordPress.com announced today that users will be able to post comments on WordPress sites by signing in with their Twitter or Facebook accounts.

Analysts note that although this looks like a very small feature on the surface, it also gives WordPress blogs more opportunities to collect comments. Third-party comment systems such as Disqus and Echo already allow users to comment with Twitter or Facebook accounts, but for a blogging platform this is a big step forward.

WordPress executive Scott Berkun said on the official blog that the new login system lets users be signed in to different services at the same time, which is very convenient for users who want to comment via their Twitter or Facebook accounts.

Reportedly, comments posted via Twitter or Facebook accounts will not be displayed on the social networks; the login is only how WordPress verifies a user's identity. In the future, WordPress may allow users to share and publish comments on Facebook or Twitter.

Comments posted by WordPress blog users via Facebook and Twitter will be verified through the JetPack plugin developed by Automattic.

Customizing smiley icons in WordPress CKEditor

The default WordPress editor is not great, so I usually replace it with CKEditor by installing CKEditor For WordPress. After installation the editor is replaced; by default the comment-box editor is replaced with CKEditor too, which can sometimes break the theme's styling, and it can be disabled under CKEditor -> Basic Settings.

CKEditor's default smiley set is not a great fit for Chinese users, so we can rework it and swap in a set we like. Here is how.

1. Download the smiley pack you want, usually GIF images. Suppose the folder containing the images is called mysmiley; copy that folder into the plugin's smiley directory at

wp-content/plugins/ckeditor-for-wordpress/ckeditor/plugins/smiley/images

2. Edit the configuration file wp-content/plugins/ckeditor-for-wordpress/ckeditor.config.js

Inside CKEDITOR.editorConfig = function(config) { … } add the following code:

config.smiley_path = CKEDITOR.basePath + 'plugins/smiley/images/mysmiley/';

config.smiley_images = ['1.gif', '2.gif'];

The first line defines the path of the folder holding the smiley files, and the second is the array of smiley file names. Now when you click the editor's smiley button, the custom icons show up.

If you have too many icons, some of them won't fit and simply won't be shown, and the smiley dialog has no scrollbar. To avoid this, tweak the CSS: find wp-content/plugins/ckeditor-for-wordpress/ckeditor/skins/kama/dialog.css (assuming you use the default kama skin) and append the following line at the end:

.cke_dialog_ui_html{height:350px;overflow:auto;}

This file is minified, so be careful not to introduce stray whitespace when adding the code. The height value sets the dialog's height and can be adjusted to your needs. With this change, a scrollbar appears when there are many smileys and everything works normally. If you only have a few smileys, skip this change, or the dialog will render oddly.

Below are a few ready-made smiley packs to share; the configuration statements are in each archive's readme.txt.

1. The Onion Head smiley set

Download: CKEditor Onion Head smiley icons

2. QQ smiley icons

Download: CKEditor QQ smileys

An intuitive introduction to the XML-RPC-based Pingback specification, widely used in WordPress

Pingback is a newcomer born in the blogosphere; put plainly, it serves the same purpose as trackback on CSDN, but with a more complete mechanism, and it is easy to implement in PHP.

Traditional blogging works like this: I write a brilliant post, you happen to read it, you disagree with my point, and, unluckily for me, you love to argue. To argue with me you have to leave a comment on my blog, and you ramble on for 1,000 characters. The problem is that I don't enjoy arguing, so I cap comments at 100 characters.

Here's the issue: if you have several kilobytes of thoughts about my post, burying them in my comment section feels like a waste. You can publish a new post on your own blog instead, but to make sure I see your masterpiece (so we can argue), you would still have to email me about it. Not terribly complicated, but still a hassle.

Pingback streamlines this: just include, in your long-winded commentary, a hyperlink pointing at my post. I will then be notified of your commentary, and it will automatically show up in my blog's comments.

Magical? Behind the magic is old wine in a new bottle. To understand Pingback it helps to know a little about web services; if you don't, no matter: a web service is simply two servers shuttling data back and forth, and of course they need a common language to do it. There are two such languages, SOAP and XML-RPC, both fully supported in PHP 5. SOAP is robust but complex; XML-RPC is much simpler and more practical. Pingback is built on XML-RPC.

Here is the concrete flow:

1. I publish a post at http://www.renseng.com/learning/dede-cms-remove-page-index-html.html. If you open that page and view the source, you will notice a link element: <link rel="pingback" href="http://www.renseng.com/xmlrpc.php" />. It identifies a pingback server address: http://www.renseng.com/xmlrpc.php.

2. You read the post and start writing your long-winded commentary on your own blog, perhaps starting like this: carche mentioned in <a href="http://www.renseng.com/learning/dede-cms-remove-page-index-html.html">CURL……….</a> that ..., and I beg to differ...

3. You publish your post. If your blog runs WordPress, it scans the links mentioned in your post and finds http://www.renseng.com/learning/dede-cms-remove-page-index-html.html. WordPress fetches that page and uses a regular expression like "/<link\s+rel=\"?pingback\"?\s+href=\"?(^>*)\"?\s+>/" to look for the pingback server address; once found, the data exchange begins.

4. Your blog system sends the discovered pingback server a message along the lines of: hello, a certain blog post has referenced the hyperlink http://www.renseng.com/learning/dede-cms-remove-page-index-html.html.

5. My pingback server receives the message and first checks whether this is actually the case. If so, it returns an arbitrary string; if not, it returns an error code. My blog system then fetches the content of your commentary based on the information in your request and displays it in my blog's comments.

That is the rough flow; for the detailed specification, see: http://www.hixie.ch/specs/pingback/pingback

The waiter shouts toward the kitchen: "Chef, come out and help this customer cut up this piece of beef!"

A man calls out: "Waiter, over here!" The waiter: "Hello, how can I help?" The man asks angrily: "My bowl of beef noodles costs 20 yuan; how come there's only one piece of beef in it?" The waiter: "Sir, how many pieces would you like?" The man thinks it over: "At least five or six." The waiter shouts toward the kitchen: "Chef, come out and help this customer cut up this piece of beef!"

Live Writer WordPress XMLRPC Error Invalid

The response to the metaWeblog.newPost method received from the weblog server was invalid:
Invalid response document returned from XmlRpc server

A few days ago this error suddenly appeared when I published a post with Live Writer. I tried many fixes found online, none of which worked; finally, thinking back carefully, I remembered that I had recently changed my theme.

I went through all the recently modified files and found that functions.php had been saved as UTF-8 with a BOM. I removed the BOM right away, and the problem was solved.

Takeaway: when editing UTF-8 files on Windows, strip the BOM before uploading. Otherwise, when something goes wrong it is very hard to track down, because the code itself is perfectly fine.