JSON Web Token Explained [repost]

Understanding JWT

JSON Web Tokens (JWT) are a standard way of representing security claims between the add-on and the Atlassian host product. A JWT token is simply a signed JSON object which contains information which enables the receiver to authenticate the sender of the request.

Structure of a JWT token

A JWT token consists of three dot-separated parts. Once you understand the format, it's actually pretty simple:

<base64url-encoded header>.<base64url-encoded claims>.<base64url-encoded signature>

In other words:

  • You create a header object in JSON format, then encode it in base64url
  • You create a claims object in JSON format, then encode it in base64url
  • You create a signature over the encoded header and claims (we’ll get into that later), then encode it in base64url
  • You concatenate the three items with the “.” separator

You shouldn’t actually have to do this manually, as there are libraries available in most languages, as we describe in the JWT libraries section. However, it is important that you understand the fields in the JSON header and claims objects described in the next sections:


Header

The header object declares the type of the encoded object and the algorithm used for the cryptographic signature. Atlassian Connect always uses the same values for these: the typ property will be “JWT” and the alg property will be “HS256”.

| Attribute | Type | Description |
| --- | --- | --- |
| typ | String | Type of the token, defaulted to “JWT”. Specifies that this is a JWT token. |
| alg (mandatory) | String | Algorithm. Specifies the algorithm used to sign the token. In atlassian-connect version 1.0 we support the HMAC SHA-256 algorithm, which the JWT specification identifies using the string “HS256”. |


Your JWT library or implementation should discard any tokens which specify alg: none, as accepting them would bypass signature verification entirely.
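As a minimal illustration of that check (a JDK-only sketch, not part of any library API; the class and method names are mine), you can decode the header segment and reject the token before attempting any verification:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AlgNoneGuard {

    // Returns true if the token's header declares the forbidden "none" algorithm.
    // A real implementation would parse the header as JSON; this sketch just
    // normalises and inspects the decoded header string.
    public static boolean usesAlgNone(String jwtToken) {
        String encodedHeader = jwtToken.split("\\.", -1)[0];
        String header = new String(Base64.getUrlDecoder().decode(encodedHeader),
                StandardCharsets.UTF_8);
        return header.replace(" ", "").toLowerCase().contains("\"alg\":\"none\"");
    }
}
```

Any token failing this check should be discarded before further processing.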


Claims

The claims object contains security information about the message you’re transmitting. The attributes of this object provide information to ensure the authenticity of the claim. The information includes the issuer, when the token was issued, when the token will expire, and other contextual information, described below.

{
    "iss": "jira:1314039",
    "iat": 1300819370,
    "exp": 1300819380,
    "qsh": "8063ff4ca1e41df7bc90c8ab6d0f6207d491cf6dad7c66ea797b4614b71922e9",
    "sub": "batman",
    "context": {
        "user": {
            "userKey": "batman",
            "username": "bwayne",
            "displayName": "Bruce Wayne"
        }
    }
}
  • iss (mandatory, String): the issuer of the claim. Connect uses it to identify the application making the call. For example:
    • If the Atlassian product is the calling application, it contains the unique identifier of the tenant. This is the clientKey that you receive in the installed callback. You should reject unrecognised issuers.
    • If the add-on is the calling application, it is the add-on key specified in the add-on descriptor.
  • iat (mandatory, Long): issued-at time. Contains the UTC Unix time at which this token was issued. There are no hard requirements around this claim, but it does not make sense for it to be significantly in the future. Also, significantly old issued-at times may indicate the replay of suspiciously old tokens.
  • exp (mandatory, Long): expiration time. Contains the UTC Unix time after which you should no longer accept this token. It should be after the issued-at time.
  • qsh (mandatory, String): query string hash. A custom Atlassian claim that prevents URL tampering.
  • sub (optional, String): the subject of this token. This is the user associated with the relevant action, and may not be present if there is no logged-in user.
  • aud (optional, String or String[]): the audience(s) of this token. For REST API calls from an add-on to a product, the audience claim can be used to disambiguate the intended recipients. This attribute is not used for JIRA and Confluence at the moment, but will become mandatory when making REST calls from an add-on to e.g. the bitbucket.org domain.
  • context (optional, Object): the context claim is an extension added by Atlassian Connect which may contain useful context for outbound requests (from the product to your add-on). The current user (the same user as in the sub claim) is added to the context. It contains the userKey, username and displayName for the subject.

"context": {
    "user": {
        "userKey": "batman",
        "username": "bwayne",
        "displayName": "Bruce Wayne"
    }
}
  • userKey — the primary key of the user. Any time you want to store a reference to a user in long-term storage (e.g. a database or index) you should use the key, because it can never change. The user key should never be displayed to the user, as it may be a non-human-readable value.
  • username — a unique secondary key, which should not be stored in long-term storage because it can change over time. This is the value that the user logs in to the application with, and it may be displayed to the user.
  • displayName — the user’s name.

You should use a little leeway when processing time-based claims, as clocks may drift apart. The JWT specification suggests no more than a few minutes. Judicious use of the time-based claims allows for replays within a limited window. This can be useful when all or part of a page is refreshed or when it is valid for a user to repeatedly perform identical actions (e.g. clicking the same button).
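A sketch of such leeway handling (times are Unix seconds; the 180-second window and the names are my own illustrative choices):

```java
public class TimeClaimChecker {

    // A few minutes of tolerated clock drift, in line with the JWT spec's suggestion.
    static final long LEEWAY_SECONDS = 180;

    // Accepts a token only if, allowing for clock drift, it has not expired
    // and was not issued in the future.
    public static boolean timesValid(long iat, long exp, long nowSeconds) {
        boolean notExpired = nowSeconds <= exp + LEEWAY_SECONDS;
        boolean notFromFuture = iat <= nowSeconds + LEEWAY_SECONDS;
        return notExpired && notFromFuture;
    }
}
```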


Signature

The signature of the token is produced by applying a hashing algorithm to the header and claims sections of the token. This provides a way to verify that the claims and header haven’t been compromised during transmission. The signature will also detect if a different secret is used for signing. In the JWT spec, there are multiple algorithms you can use to create the signature, but Atlassian Connect uses the HMAC SHA-256 algorithm. If the JWT token has no specified algorithm, you should discard it, as its signature cannot be verified.

JWT libraries

Most modern languages have JWT libraries available. We recommend you use one of these (or another JWT-compatible library) rather than hand-crafting JWT tokens.

| Language | Library |
| --- | --- |
| Java | atlassian-jwt and jsontoken |
| Python | pyjwt |
| Node.js | node-jwt-simple |
| Ruby | ruby-jwt |
| PHP | firebase php-jwt and luciferous jwt |
| .NET | jwt |
| Haskell | haskell-jwt |

The JWT decoder is a handy web based decoder for Atlassian Connect JWT tokens.

Creating a JWT token

Here is an example of creating a JWT token, in Java, using atlassian-jwt and nimbus-jwt (last tested with atlassian-jwt version 1.5.3 and nimbus-jwt version 2.16):

import java.io.UnsupportedEncodingException;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import com.atlassian.jwt.*;
import com.atlassian.jwt.core.writer.*;
import com.atlassian.jwt.httpclient.CanonicalHttpUriRequest;
import com.atlassian.jwt.writer.JwtJsonBuilder;
import com.atlassian.jwt.writer.JwtWriterFactory;

public class JWTSample {

    public String createUriWithJwt()
            throws UnsupportedEncodingException, NoSuchAlgorithmException {
        long issuedAt = System.currentTimeMillis() / 1000L;
        long expiresAt = issuedAt + 180L;
        String key = "atlassian-connect-addon"; // the key from the add-on descriptor
        String sharedSecret = "…";              // the shared secret received during
                                                // the add-on installation handshake
        String method = "GET";
        String baseUrl = "https://<my-dev-environment>.atlassian.net/";
        String contextPath = "/";
        String apiPath = "/rest/api/latest/serverInfo";

        JwtJsonBuilder jwtBuilder = new JsonSmartJwtJsonBuilder()
                .issuedAt(issuedAt)
                .expirationTime(expiresAt)
                .issuer(key);

        CanonicalHttpUriRequest canonical = new CanonicalHttpUriRequest(method,
                apiPath, contextPath, new HashMap());
        JwtClaimsBuilder.appendHttpRequestClaims(jwtBuilder, canonical);

        JwtWriterFactory jwtWriterFactory = new NimbusJwtWriterFactory();
        String jwtbuilt = jwtBuilder.build();
        String jwtToken = jwtWriterFactory.macSigningWriter(SigningAlgorithm.HS256,
                sharedSecret).jsonToJwt(jwtbuilt);

        String apiUrl = baseUrl + apiPath + "?jwt=" + jwtToken;
        return apiUrl;
    }
}

Decoding and verifying a JWT token

Here is a minimal example of decoding and verifying a JWT token, in Java, using atlassian-jwt and nimbus-jwt (last tested with atlassian-jwt version 1.5.3 and nimbus-jwt version 2.16).

NOTE: This example does not include any error handling. See AbstractJwtAuthenticator from atlassian-jwt for recommendations of how to handle the different error cases.

import com.atlassian.jwt.*;
import com.atlassian.jwt.core.http.JavaxJwtRequestExtractor;
import com.atlassian.jwt.core.reader.*;
import com.atlassian.jwt.exception.*;
import com.atlassian.jwt.reader.*;
import javax.servlet.http.HttpServletRequest;
import java.io.UnsupportedEncodingException;
import java.security.NoSuchAlgorithmException;
import java.util.Map;

public class JWTVerificationSample {

    public Jwt verifyRequest(HttpServletRequest request,
            JwtIssuerValidator issuerValidator,
            JwtIssuerSharedSecretService issuerSharedSecretService)
            throws UnsupportedEncodingException, NoSuchAlgorithmException,
            JwtVerificationException, JwtIssuerLacksSharedSecretException,
            JwtUnknownIssuerException, JwtParseException {
        JwtReaderFactory jwtReaderFactory = new NimbusJwtReaderFactory(
                issuerValidator, issuerSharedSecretService);
        JavaxJwtRequestExtractor jwtRequestExtractor = new JavaxJwtRequestExtractor();
        CanonicalHttpRequest canonicalHttpRequest
                = jwtRequestExtractor.getCanonicalHttpRequest(request);
        Map<String, ? extends JwtClaimVerifier> requiredClaims
                = JwtClaimVerifiersBuilder.build(canonicalHttpRequest);
        String jwt = jwtRequestExtractor.extractJwt(request);
        return jwtReaderFactory.getReader(jwt).readAndVerify(jwt, requiredClaims);
    }
}

Decoding a JWT token

Decoding the JWT token reverses the steps followed during the creation of the token, to extract the header, claims and signature. Here is an example in Java:

String jwtToken = …; // e.g. extracted from the request
String[] base64UrlEncodedSegments = jwtToken.split("\\."); // split() takes a regex, so escape the '.'
String base64UrlEncodedHeader = base64UrlEncodedSegments[0];
String base64UrlEncodedClaims = base64UrlEncodedSegments[1];
String signature = base64UrlEncodedSegments[2];
String header = base64UrlDecode(base64UrlEncodedHeader); // e.g. via java.util.Base64.getUrlDecoder()
String claims = base64UrlDecode(base64UrlEncodedClaims);

This gives us the following header:

{
    "alg": "HS256",
    "typ": "JWT"
}

and claims:

{
    "iss": "jira:15489595",
    "iat": 1386898951,
    "qsh": "8063ff4ca1e41df7bc90c8ab6d0f6207d491cf6dad7c66ea797b4614b71922e9"
}



Verifying a JWT token

JWT libraries typically provide methods to be able to verify a received JWT token. Here is an example using nimbus-jose-jwt and json-smart:

import com.nimbusds.jose.JOSEException;
import com.nimbusds.jose.JWSObject;
import com.nimbusds.jose.JWSVerifier;
import com.nimbusds.jwt.JWTClaimsSet;
import net.minidev.json.JSONObject;
import java.text.ParseException;

public JWTClaimsSet read(String jwt, JWSVerifier verifier) throws ParseException, JOSEException {
    JWSObject jwsObject = JWSObject.parse(jwt);

    if (!jwsObject.verify(verifier)) {
        throw new IllegalArgumentException("Fraudulent JWT token: " + jwt);
    }

    JSONObject jsonPayload = jwsObject.getPayload().toJSONObject();
    return JWTClaimsSet.parse(jsonPayload);
}

Creating a query string hash

A query string hash is a signed canonical request for the URI of the API you want to call.

qsh = `sign(canonical-request)`
canonical-request = `canonical-method + '&' + canonical-URI + '&' + canonical-query-string`

A canonical request is a normalised representation of the URI. Here is an example for the following URL, assuming you want to do a "GET" operation:

"https://<my-dev-environment>.atlassian.net/path/to/service?zee_last=param&repeated=parameter 1&first=param&repeated=parameter 2"

The canonical request is:

"GET&/path/to/service&first=param&repeated=parameter%201,parameter%202&zee_last=param"

To create a query string hash, follow the detailed instructions below:

  1. Compute canonical method
    • Simply the upper-case of the method name (e.g. "GET" or "PUT")
  2. Append the character '&'
  3. Compute canonical URI
    • Discard the protocol, server, port, context path and query parameters from the full URL.
      • For requests targeting add-ons discard the baseUrl in the add-on descriptor.
    • Removing the context path allows a reverse proxy to redirect incoming requests for "jira.example.com/getsomething" to "example.com/jira/getsomething" without breaking authentication. The requester cannot know that the reverse proxy will prepend the context path "/jira" to the originally requested path "/getsomething".
    • Empty-string is not permitted; use "/" instead.
    • Url-encode any '&' characters in the path.
    • Do not suffix with a '/' character unless it is the only character. e.g.
      • Canonical URI of "https://example.atlassian.net/wiki/some/path/?param=value" is "/some/path"
      • Canonical URI of "https://example.atlassian.net" is "/"
  4. Append the character '&'
  5. Compute canonical query string
    • The query string will use percent-encoding.
    • Sort the query parameters primarily by their percent-encoded names and secondarily by their percent-encoded values.
    • Sorting is by codepoint: sort(["a", "A", "b", "B"]) => ["A", "B", "a", "b"]
    • For each parameter append its percent-encoded name, the '=' character and then its percent-encoded value.
    • In the case of repeated parameters append the ',' character and subsequent percent-encoded values.
    • Ignore the jwt parameter, if present.
    • Some particular values to be aware of:
      • A whitespace character is encoded as "%20",
      • "+" as "%2B",
      • "*" as "%2A" and
      • "~" as "~".
        (These values used for consistency with OAuth1.)
  6. Convert the canonical request string to bytes
    • The encoding used to represent characters as bytes is UTF-8
  7. Hash the canonical request bytes using the SHA-256 algorithm
    • e.g. the SHA-256 hash of "foo" is "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae"
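The steps above can be sketched using only the JDK (class and helper names are mine; an Atlassian Connect add-on would normally rely on atlassian-jwt's canonical request classes instead). The sketch assumes the path passed in is already canonical (context path stripped, no trailing slash):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class QueryStringHash {

    // Percent-encode one name or value per the rules above:
    // spaces become %20 (never '+'), '*' becomes %2A, '~' stays literal.
    static String enc(String s) throws UnsupportedEncodingException {
        return URLEncoder.encode(s, "UTF-8")
                .replace("+", "%20")
                .replace("*", "%2A")
                .replace("%7E", "~");
    }

    // Builds canonical-method + '&' + canonical-URI + '&' + canonical-query-string.
    public static String canonicalRequest(String method, String path,
            Map<String, List<String>> params) throws UnsupportedEncodingException {
        // TreeMap keyed on the percent-encoded names gives the primary sort.
        TreeMap<String, List<String>> sorted = new TreeMap<>();
        for (Map.Entry<String, List<String>> e : params.entrySet()) {
            if (e.getKey().equals("jwt")) continue; // ignore the jwt parameter
            List<String> values = new ArrayList<>();
            for (String v : e.getValue()) values.add(enc(v));
            values.sort(null); // secondary sort by percent-encoded value
            sorted.put(enc(e.getKey()), values);
        }
        StringBuilder qs = new StringBuilder();
        for (Map.Entry<String, List<String>> e : sorted.entrySet()) {
            if (qs.length() > 0) qs.append('&');
            // Repeated values are joined with ','.
            qs.append(e.getKey()).append('=').append(String.join(",", e.getValue()));
        }
        String canonicalUri = path.isEmpty() ? "/" : path;
        return method.toUpperCase() + "&" + canonicalUri + "&" + qs;
    }

    // SHA-256 over the UTF-8 bytes of the canonical request, hex-encoded.
    public static String qsh(String canonicalRequest) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(canonicalRequest.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}
```

Running canonicalRequest("GET", "/path/to/service", …) on the worked example's parameters reproduces the canonical request shown earlier.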

Advanced: Creating a JWT token manually


You should only need to read this section if you are planning to create JWT tokens manually, i.e. if you are not using one of the libraries listed in the previous section.

More details on JWT tokens

The format of a JWT token is simple: <base64url-encoded header>.<base64url-encoded claims>.<signature>.

  • Each section is separated from the others by a period character (.).
  • Each section is base64url encoded, so you will need to decode each one to make them human-readable. Note that encoding with base64 and not base64url will result in an incorrect JWT token for payloads with non UTF-8 characters.
  • The header specifies a very small amount of information that the receiver needs in order to parse and verify the JWT token.
    • All JWT token headers state that the type is “JWT”.
    • The algorithm used to sign the JWT token is needed so that the receiver can verify the signature.
  • The claims are a list of assertions that the issuer is making: each says that “this named field” has “this value”.
    • Some, like the “iss” claim, which identifies the issuer of this JWT token, have standard names and uses.
    • Others are custom claims. We limit our use of custom claims as much as possible, for ease of implementation.
  • The signature is computed by using an algorithm such as HMAC SHA-256 plus the header and claims sections.
    • The receiver verifies that the signature was computed using the genuine JWT header and claims sections, the indicated algorithm and a previously established secret.
    • An attacker tampering with the header or claims will cause signature verification to fail.
    • An attacker signing with a different secret will cause signature verification to fail.
    • There are various algorithm choices legal in the JWT spec. In atlassian-connect version 1.0 we support HMAC SHA-256. Important: your implementation should discard any JWT tokens which specify alg: none as these are not subject to signature verification.

Steps to Follow

  1. Create a header JSON object.
  2. Convert the header JSON object to a UTF-8 encoded string and base64url encode it. That gives you encodedHeader.
  3. Create a claims JSON object, including a query string hash.
  4. Convert the claims JSON object to a UTF-8 encoded string and base64url encode it. That gives you encodedClaims.
  5. Concatenate the encoded header, a period character (.) and the encoded claims set. That gives you signingInput = encodedHeader + "." + encodedClaims.
  6. Compute the signature of signingInput using the JWT or cryptographic library of your choice, then base64url encode it. That gives you encodedSignature.
  7. Concatenate the signing input, another period character and the signature, which gives you the JWT token: jwtToken = signingInput + "." + encodedSignature.


Here is an example in Java using gson, commons-codec, and the Java security and crypto libraries:

public class JwtClaims {
    protected String iss;
    protected long iat;
    protected long exp;
    protected String qsh;
    protected String sub;
    // + getters/setters/constructors
}

public class JwtHeader {
    protected String alg;
    protected String typ;
    // + getters/setters/constructors
}

import static org.apache.commons.codec.binary.Base64.encodeBase64URLSafeString;
import static org.apache.commons.codec.binary.Hex.encodeHexString;
import java.io.UnsupportedEncodingException;
import java.security.*;
import javax.crypto.*;
import javax.crypto.spec.SecretKeySpec;
import com.google.gson.Gson;

public class JwtBuilder {

    public static String generateJWTToken(String requestUrl, String canonicalUrl,
            String key, String sharedSecret)
            throws NoSuchAlgorithmException, UnsupportedEncodingException,
            InvalidKeyException {
        JwtClaims claims = new JwtClaims();
        claims.setIss(key);
        claims.setIat(System.currentTimeMillis() / 1000L);
        claims.setExp(claims.getIat() + 180L);
        claims.setQsh(getQueryStringHash(canonicalUrl));
        String jwtToken = sign(claims, sharedSecret);
        return jwtToken;
    }

    private static String sign(JwtClaims claims, String sharedSecret)
            throws InvalidKeyException, NoSuchAlgorithmException {
        String signingInput = getSigningInput(claims, sharedSecret);
        String signed256 = signHmac256(signingInput, sharedSecret);
        return signingInput + "." + signed256;
    }

    private static String getSigningInput(JwtClaims claims, String sharedSecret)
            throws InvalidKeyException, NoSuchAlgorithmException {
        JwtHeader header = new JwtHeader();
        header.alg = "HS256";
        header.typ = "JWT";
        Gson gson = new Gson();
        String headerJsonString = gson.toJson(header);
        String claimsJsonString = gson.toJson(claims);
        String signingInput = encodeBase64URLSafeString(headerJsonString.getBytes())
                + "." + encodeBase64URLSafeString(claimsJsonString.getBytes());
        return signingInput;
    }

    private static String signHmac256(String signingInput, String sharedSecret)
            throws NoSuchAlgorithmException, InvalidKeyException {
        SecretKey key = new SecretKeySpec(sharedSecret.getBytes(), "HmacSHA256");
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        return encodeBase64URLSafeString(mac.doFinal(signingInput.getBytes()));
    }

    private static String getQueryStringHash(String canonicalUrl)
            throws NoSuchAlgorithmException, UnsupportedEncodingException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(canonicalUrl.getBytes("UTF-8"));
        byte[] digest = md.digest();
        return encodeHexString(digest);
    }
}

public class Sample {
    public String getUrlSample() throws Exception {
        String requestUrl = "…";     // the full URL of the requested API resource
        String canonicalUrl = "GET&/rest/atlassian-connect/latest/license&";
        String key = "…";            // from the add-on descriptor,
                                     // received during the installation handshake
        String sharedSecret = "…";   // received during the installation handshake
        String jwtToken = JwtBuilder.generateJWTToken(
                requestUrl, canonicalUrl, key, sharedSecret);
        String restAPIUrl = requestUrl + "?jwt=" + jwtToken;
        return restAPIUrl;
    }
}

Stateless Authentication implementation using JWT, Nginx+Lua and Memcached

If you are already familiar with stateless authentication and JWT, proceed with this implementation post; otherwise, read the previous blog, Stateless Authentication, first.

As I mentioned in my previous blog, JWTs can be signed using a secret (with the HMAC algorithm) or with a public/private key pair using RSA.

Clients can access resources from different applications, so to validate the token at each application we require the secret or a public/private key.

Problems of validating the token in every application

  1. We have to maintain the secret key in all the applications and write or inject the token validation logic into every application. The validation logic may include more than token validation, such as fingerprint mismatch checks, session idle timeout, and more, depending on the requirements.
  2. If the applications are developed in different languages, then we have to implement the token validation logic for each application's technology stack, and maintenance becomes very difficult.


Instead of maintaining the validation logic in every application, we can write it in one common place so that every request passes through that logic, regardless of the application (note: the applications could be developed in any language). I have chosen a reverse proxy server (Nginx) to hold the validation logic, implemented with the help of Lua.


  1. We don’t need to maintain the secret or private/public key in every application; we keep it only on the authentication server to generate tokens and on the proxy server (Nginx) to validate them.
  2. Maintenance of the validation logic is easy.

Before jumping into the flow and implementation, let's see why we chose this technology stack.

Why JWT?

To achieve stateless authentication we have chosen JWT (JSON Web Token). It lets us easily and securely transmit information between parties as a JSON object. If we want to put sensitive information in the JWT, we can encrypt the payload itself using the JSON Web Encryption (JWE) specification.

Why Nginx + Lua?

Nginx+Lua is a self-contained web server embedding the scripting language Lua. Powerful applications can be written directly inside Nginx without using cgi, fastcgi, or uwsgi. By adding a little Lua code to an existing Nginx configuration file, it is easy to add small features.

One of the core benefits of Nginx+Lua is that it is fully asynchronous. Nginx+Lua inherits the same event loop model that has made Nginx a popular choice of webserver. “Asynchronous” simply means that Nginx can interrupt your code when it is waiting on a blocking operation, such as an outgoing connection or reading a file, and run the code of another incoming HTTP Request.

Why Memcached?

To keep the application more secure, along with token validation we also perform a fingerprint check and handle idle timeout. That is, if the user is idle for some time and performs no action, the user has to be logged out of the application. To do the fingerprint and idle-timeout checks, some information needs to be shared across the applications, and for that we have chosen Memcached (a distributed cache).

Note: If you don’t want the fingerprint mismatch check and the idle-timeout check, you can simply omit the Memcached component from the flow.



[Flow diagram: client → Nginx (validation via Lua + Memcached) → application, with the authentication server issuing tokens]


Step 1

The client tries to access a resource from the application without a JWT token, or with an invalid one. As shown in the flow, the request goes to the proxy server (Nginx).

Step 2

Nginx looks for the auth header (X-AUTH-TOKEN) and validates the token with the help of Lua.


Step 3

As the token is missing or invalid, Nginx sends an error response to the client.


Step 4

Now the user has to log in to the system, so the client loads the login page.

Step 5

The client sends a request to the authentication server to authenticate the user. Along with the username and password, the client also sends the fingerprint. We use the fingerprint to make sure that all requests originate from the same device on which the user logged in to the system.

Sample authenticate request body


Step 6

The authentication server validates the credentials and creates a JWT token with a tokenId (a randomly generated UUID) as a claim; this tokenId uniquely identifies the user. It then sets the JWT token in the response header (X-AUTH-TOKEN).

Create JWT Token

Add this dependency to your pom.xml to work on JWT

While creating the token you can set any number of claims.
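The original post's creation snippet is not reproduced above. As an illustrative, JDK-only sketch of issuing an HS256 token carrying a tokenId claim (class and method names are mine; a real implementation would typically use the JWT library added to pom.xml):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TokenIssuer {

    // Builds a compact HS256 JWT whose claims carry the random tokenId
    // used to look up user meta information in Memcached.
    public static String issue(String sharedSecret, String tokenId) throws Exception {
        Base64.Encoder b64 = Base64.getUrlEncoder().withoutPadding();
        String header = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
        String claims = "{\"tokenId\":\"" + tokenId + "\"}";
        String signingInput = b64.encodeToString(header.getBytes(StandardCharsets.UTF_8))
                + "." + b64.encodeToString(claims.getBytes(StandardCharsets.UTF_8));
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String signature = b64.encodeToString(mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));
        return signingInput + "." + signature;
    }
}
```

The server would call this with a fresh UUID, e.g. TokenIssuer.issue(secret, UUID.randomUUID().toString()), and set the result in the X-AUTH-TOKEN response header.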


Generated JWT token looks like below

And the JWT token payload looks like the example below. You can put whatever data you want in it, such as the roles & permissions associated with the user, and so on.


Step 7

Put the tokenId as the key, and user meta information (fingerprint, last access time, etc.) as the value in Memcached. This is used to verify the fingerprint and the session idle timeout on the Nginx side using Lua.

Sample Memcached content


Put Content in Memcached

Add this dependency to your pom.xml to work on Memcached



Step 8

Send the response back to the client from the authentication server with the response header X-AUTH-TOKEN.


Step 9

Fetch the token from the response header and store it in local storage on the client side, so that the client can send this token in the request header from the next request onwards.

Step 10

Now the client accesses the resource from the application with a valid JWT token. As shown in the flow, the request goes to the proxy server (Nginx). With every request the client sends a fingerprint in a header; consider the header name to be "FINGER-PRINT".

Step 11

Nginx validates the token. As the token is valid, it extracts the tokenId from the JWT token to fetch the user meta information from Memcached.

If there is no entry in Memcached for that tokenId, Nginx simply sends a "LOGGED_OUT" response to the client.

But in our case the user is logged in to the system, so there will be an entry in Memcached for the tokenId. Nginx fetches that user meta information to perform the following checks.

Fingerprint mismatch: While sending the authentication request, the client sends the fingerprint along with the username and password. We store that fingerprint value in Memcached and compare it with the fingerprint that arrives with every request. If the fingerprints match, processing proceeds; otherwise Nginx sends a response to the client saying that the fingerprint is mismatched.

Session idle timeout: On successful authentication of a user, the authentication server puts the user's configured session_idle_timeout in Memcached. If it is configured as "-1", we simply skip the idle-timeout check. Otherwise, for every request we check whether the session is idle. If the session is not idle, we update the last_access_time value in Memcached to the current system time. If the session is idle, Nginx sends a timeout response to the client.

Complete Validation Logic at Nginx using Lua
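The original post's Lua snippet did not survive conversion. As an illustrative sketch of the same checks (written in Java like the rest of this document, with a plain Map standing in for Memcached; all names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class TokenValidator {

    // tokenId -> {fingerprint, last_access_time, session_idle_timeout},
    // mimicking the Memcached entries written in Step 7.
    private final Map<String, Map<String, String>> userMeta = new HashMap<>();

    public void put(String tokenId, String fingerprint, long lastAccess, long idleTimeout) {
        Map<String, String> meta = new HashMap<>();
        meta.put("fingerprint", fingerprint);
        meta.put("last_access_time", Long.toString(lastAccess));
        meta.put("session_idle_timeout", Long.toString(idleTimeout));
        userMeta.put(tokenId, meta);
    }

    // Mirrors Step 11: missing entry -> LOGGED_OUT, then the fingerprint check,
    // then the idle-timeout check (skipped when configured as -1), else OK.
    public String validate(String tokenId, String fingerprint, long nowSeconds) {
        Map<String, String> meta = userMeta.get(tokenId);
        if (meta == null) return "LOGGED_OUT";
        if (!meta.get("fingerprint").equals(fingerprint)) return "FINGERPRINT_MISMATCH";
        long idleTimeout = Long.parseLong(meta.get("session_idle_timeout"));
        long lastAccess = Long.parseLong(meta.get("last_access_time"));
        if (idleTimeout != -1 && nowSeconds - lastAccess > idleTimeout) {
            return "SESSION_IDLE_TIMEOUT";
        }
        meta.put("last_access_time", Long.toString(nowSeconds)); // refresh on activity
        return "OK";
    }
}
```

In the real deployment the same sequence runs inside Nginx (e.g. in an access-phase Lua handler) against Memcached before the request is proxied to the application.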


Step 12

Once the request has passed through the validation logic described above, Nginx proxy_passes the request to the application.


Step 13

Application sends a response of requested resource to the client.

How to achieve logout?

There is an open question (unanswered) about how to achieve logout on the server side if we go with stateless authentication using JWT.

Mostly, people discuss handling logout on the client side:

  • When the user clicks logout, the client can simply remove the token from local storage.

But I came up with a solution to achieve logout on the server side by making use of Memcached.

  • When the user clicks logout, remove the entry from Memcached that we put there in Step 7. The client can also delete the token from local storage. In the validation logic covered in Step 11, I check for the entry in Memcached; if there is no entry, the user is logged out of the application.

The Best Linux Distributions of 2016

2015 was a very important year for Linux, in both the enterprise and the consumer markets. As a Linuxer who has been using Linux since 2005, I have witnessed Linux's growth over the past decade. 2016 will be even more exciting for Linux, so we have picked out some distributions that shine. Now Linux Story will take you on a tour of the best in each area!


SUSE, the company behind openSUSE, is the oldest Linux enterprise; it was founded one year after Linus Torvalds announced Linux. It actually predates Red Hat, and it is the sponsor of the community-driven distribution openSUSE.

In 2015, the openSUSE team decided to move closer to SUSE Linux Enterprise (SLE) so that users could share the DNA of the enterprise edition, much as with CentOS and Ubuntu. openSUSE then became openSUSE Leap, based directly on SLE SP1. The two distributions share a codebase to mutual benefit: SUSE absorbs the best of openSUSE, and vice versa. With this move, openSUSE also abandoned its regular release cycle; new versions are now aligned with SLE, which means each release has a longer life cycle. As a result, openSUSE becomes a very important distribution, since potential SLE users can try openSUSE Leap. And that is not all: openSUSE also released a pure rolling distribution, Tumbleweed (see the Linux Story article "Life, the Universe and the Ultimate Answer to Linux? openSUSE Leap 42.1 Released"). So users can now choose between the ultra-stable openSUSE Leap and the always-up-to-date openSUSE Tumbleweed.


Most customizable distribution: Arch Linux

Arch Linux is the best rolling distribution available today. Well, I may be biased, as I am an Arch Linux user. More importantly, Arch performs well in other areas too, which is why I chose it as my operating system.

Arch Linux is a distribution for those who want to understand everything about Linux: because you must install everything by hand, it teaches you every part of a Linux-based operating system. Arch Linux is also the most customizable distribution; all you get is a base system, and on top of it you build your own personal distribution. For better or worse, unlike openSUSE and Ubuntu, it has no extra patches or integration work, so you get exactly what the upstream developers created. Arch Linux is also one of the best rolling distributions: it is always up to date, users always run the latest packages, and they can even run pre-release software from the staging repositories. Arch is famous for its excellent documentation as well; the Arch Wiki gives me material on anything Linux-related. My favorite thing about Arch is that the packages and software it provides can run on "any" Linux distribution, thanks to the Arch User Repository (AUR).

Best-looking distribution: elementary OS

Different Linux distributions have different focuses, and in most cases the differences are technical. In many distributions, look and feel is an afterthought, more of a side project. Whatever the angle, though, Linux Story has always considered elementary OS a very beautiful system.

elementary OS is trying to change all that. Here, design comes first, and for an obvious reason: the distribution's beautiful icons were created by designers famous throughout the Linux world. elementary OS is very strict about its overall look and feel. The developers have built their own components, including the desktop environment, and they only pick applications that fit their design patterns. You can see shades of Mac OS X in the system.


Best newcomer: Solus

The Solus operating system has received quite a bit of attention recently. It is a forward-looking operating system created from scratch; it is not a derivative of Debian or Ubuntu. It ships with the Budgie desktop environment, built from the ground up to integrate with GNOME. Solus takes the same minimalist approach as Google's Chrome OS. Linux Story fully agrees that Solus is the best newcomer.

I have not used Solus much, but it looks promising. Solus is not a "new" operating system; it previously existed in different forms and under different names, but the project's new name only appeared in 2015.

Best educational operating system: ezgo Linux

ezgo is an open-source, non-profit, free, education-oriented computer operating system based on Linux. It includes a wealth of interactive teaching software and open textbooks and knowledge, covering physics, chemistry, geography, astronomy, biology, mathematics, computing, and other subjects. Its mission is to help schools bring information technology to students and teachers, and to help children, parents, and teachers access the world's most advanced knowledge and wisdom in the most convenient and effective way. The project originated in Taiwan and is currently maintained, developed, and promoted in mainland China by the ezgo China community and the Chongqing Linux User Group (ChongqingLUG). It has collected a large amount of open courseware, including PhET. Linux Story has had the honor of covering ezgo-related news; its official website is http://ezgolinux.org/. Parents, students, and teachers who care about education should keep an eye on it.

Best cloud operating system: Chrome OS

Chrome OS is not a typical Linux-based distribution: it is a browser-based operating system designed for online activities. Still, since it is based on Linux and its source code is available for anyone to compile, it holds plenty of appeal. I use Chrome OS every day; it is an excellent, maintenance-free, continuously updated operating system designed purely for life on the web. Together with Android, Chrome OS deserves credit for bringing Linux to PCs and other platforms for many newcomers. Linux Story has tried the Acer Chromebook 11 and found it quite good.

Best laptop operating system: Ubuntu MATE

Most laptops do not have very high-end hardware. If you run a resource-hungry desktop environment, you will not have much in the way of system resources or battery life left for yourself, because the system already consumes so much. That is why I find Ubuntu MATE to be an excellent operating system: it is lightweight, yet has everything needed for a good experience. Thanks to its lightweight design, most of the system's resources remain available for your real work. I consider it a truly excellent distribution for low-end hardware.


Best distribution for old hardware: Lubuntu

If you have an old laptop or desktop sitting idle, you can bring it back to life with Lubuntu. Lubuntu uses the LXDE desktop environment, though the project has merged with Razor Qt to form the LXQt project. Although the latest 15.04 release still uses LXDE, future releases will use LXQt. Lubuntu is truly an operating system suited to old hardware.

Best IoT operating system: Snappy Ubuntu Core

Snappy Ubuntu Core is the best Linux-based operating system for the Internet of Things and similar devices. The OS has enormous potential to turn almost anything into a smart device: routers, coffee machines, drones, and more. Excellent software management and containerization designed for stronger security make it all the more interesting.

Best desktop operating system: Linux Mint Cinnamon

Linux Mint Cinnamon is the best desktop operating system, and also the best choice for laptops with powerful hardware. I think of it as the Mac OS X of the Linux world. Honestly, I was once quite unhappy with Cinnamon's instability, but ever since the developers switched to LTS bases it has become incredibly stable. Because the developers no longer have to spend much time keeping up with Ubuntu, they can spend more time making Cinnamon better.

Best gaming system: Steam OS

Gaming has always been a weak spot for desktop Linux; many users dual-boot Windows just to play games. Valve Software is working hard to change that. Valve is a game distributor whose client lets games run on different platforms, and to create a Linux-based gaming framework, Valve has built its own open operating system, Steam OS. At the end of 2015, partners began bringing Steam Machines to market.



Best distribution for privacy and anonymity: Tails

And in this area, nothing beats Tails. It is a Debian-based operating system designed for privacy protection and anonymization. Tails is so good that, reportedly, the US National Security Agency (NSA) considers it a major threat to its mission.

Best multimedia production system: Ubuntu Studio

Multimedia production is one of the main weaknesses of Linux-based operating systems; all the professional-grade programs are found on Windows and Mac OS X. Linux lacks comparable audio/video production software, but a multimedia production system needs more than decent applications: it should use a lightweight desktop environment so that precious system resources such as CPU and RAM are consumed as little as possible by the system itself, leaving them for the production programs. For that reason, the best Linux multimedia production system is Ubuntu Studio, which uses the Xfce desktop environment and ships with a wide range of audio, video, and image editing applications. Linux Story has itself long used it to produce audio and video material.


Best enterprise distributions: RHEL and SLE

Enterprise users do not shop around for the distribution running on their servers; they already know the options: Red Hat Enterprise Linux or SUSE Linux Enterprise. These two names have become synonymous with enterprise-grade systems. Both companies are also innovating in containerization and software-defined infrastructure to tear down current barriers. Linux Story finds RHEL genuinely stable and genuinely good to use.


Best community server distributions: Debian and CentOS

If you plan to run a server but do not want to pay for RHEL or SLE maintenance, Debian or CentOS is your best choice. These community-driven server distributions have a gold-standard reputation. Moreover, their support cycles are long, so you need not worry about frequently upgrading the system.

Best mobile operating system: Plasma Mobile

Although the Linux-based Android dominates the mobile space, many members of the open-source community, myself included, still hope for a distribution that brings traditional Linux desktop applications to mobile devices. Ideally it would be run and maintained by a community rather than a company, so that users, rather than corporate financial targets, remain the focus. That is exactly the hope that KDE's Plasma Mobile brings.

The release is based on Kubuntu and was published in 2015. Because the KDE community is well known for following standards and developing in the open, I am full of hope for Plasma Mobile's future.

Best distribution for ARM devices: Arch Linux ARM

With the success of Android, we are surrounded by ARM devices, from the Raspberry Pi to Chromebooks to the Nvidia Shield. Traditional distributions built for Intel/AMD processors will not run on these devices. Some distributions are designed for ARM, but most target specific hardware, such as Raspbian for the Raspberry Pi. That is why Arch Linux ARM (ALARM) stands out: it is a purely community-driven distribution based on Arch Linux, and you can run it on the Raspberry Pi, Chromebooks, Android devices, the Nvidia Shield, and more. What makes this distribution even more interesting is that, thanks to the Arch User Repository (AUR), you can install many applications you might not be able to get on other distributions.


Finishing this article, I was surprised and amazed; it is exciting to see that the Linux world has something for everyone. Even if "the year of the Linux desktop" keeps slipping, it does not matter; we are delighted by Linux at every moment!