Malignant gliomas, the most common subtype of primary brain tumors, are characterized by high proliferation, strong invasiveness, and neurological destruction, and are considered among the deadliest human cancers. Analgesic-antitumor peptide (AGAP), one of the scorpion toxin polypeptides, has been shown to have antitumor activity. Here, we show that recombinant AGAP (rAGAP) not only inhibits the proliferation of the glioma cell line SHG-44 and the rat glioma cell line C6, but also suppresses the migration of SHG-44 cells in wound-healing assays. To explain these phenomena, we find that rAGAP arrests the SHG-44 cell cycle in G1 phase, accompanied by suppression of the G1 cell cycle regulatory proteins CDK2, CDK6, and p-RB through down-regulated expression of p-AKT. Meanwhile, rAGAP significantly decreases the production of NF-κB, BCL-2, p-p38, p-c-Jun, and p-Erk1/2, and further suppresses the activation of VEGF and MMP-9 in SHG-44 cells. These findings suggest that rAGAP inhibits the proliferation and migration of SHG-44 cells by arresting the cell cycle and interfering with the p-AKT, NF-κB, BCL-2, and MAPK signaling pathways.
In hybrid clouds, a technique named cloud bursting allows companies to expand their capacity to meet peak workload demands in a cost-effective manner. In this work, a cost-aware job scheduling approach based on queueing theory in hybrid clouds is proposed. The job scheduling problem in the private cloud is modeled as a queueing model, and a genetic algorithm is applied to find optimal queues for jobs to improve the utilization rate of the private cloud. Then, task execution times are predicted by a back-propagation neural network, and the max–min strategy is applied to schedule tasks across the hybrid cloud according to the prediction results. Experiments show that our cost-aware job scheduling algorithm can reduce the average job waiting time and average job response time in the private cloud. In addition, our proposed job scheduling algorithm can improve the system throughput of the private cloud, and it can also reduce the average task waiting time, average task response time, and total costs in hybrid clouds.
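The max–min scheduling step described above can be sketched as follows. This is a minimal illustration of the generic max–min heuristic, not the paper's full pipeline: predicted task execution times are assumed to be given directly (rather than produced by a back-propagation neural network), and the resource names are hypothetical. The heuristic repeatedly takes the unscheduled task with the largest predicted time and assigns it to the resource that would complete it earliest:

```python
from typing import Dict, List

def max_min_schedule(task_times: Dict[str, float],
                     resources: List[str]) -> Dict[str, str]:
    """Assign each task to a resource using the max-min heuristic."""
    finish = {r: 0.0 for r in resources}  # accumulated load per resource
    assignment: Dict[str, str] = {}
    remaining = dict(task_times)
    while remaining:
        # Pick the unscheduled task with the LARGEST predicted time...
        task = max(remaining, key=remaining.get)
        # ...and place it on the resource with the MINIMUM completion time.
        best = min(resources, key=lambda r: finish[r] + remaining[task])
        assignment[task] = best
        finish[best] += remaining[task]
        del remaining[task]
    return assignment

# Hypothetical predicted execution times (e.g., seconds) for four tasks,
# scheduled across a private and a public (burst) resource.
plan = max_min_schedule({"t1": 8.0, "t2": 3.0, "t3": 5.0, "t4": 2.0},
                        ["private", "public"])
```

Scheduling the longest tasks first tends to balance the final loads, which is why max–min is a common baseline for makespan-oriented scheduling in cloud settings.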
Vanadium-based fluorophosphates are promising sodium-ion battery cathode materials. Different phases of NaVPO4F and Na3V2(PO4)2F3 are reported in the literature. However, experiments in this work suggest that there may be confusion about single-phase NaVPO4F obtained from solid-state synthesis. Here, the mechanism underlying the structural and compositional evolution during solid-state synthesis (NaF:VPO4 = 1:1) is systematically investigated by in situ and ex situ X-ray diffraction (XRD) and electrochemical measurements. Three reactions—3NaF + 3VPO4 → Na3V2(PO4)2F3 + VPO4 (up to 500 °C), Na3V2(PO4)2F3 + VPO4 → Na3V2(PO4)3 + VF3↑ (600–800 °C), and 2Na3V2(PO4)3 → 2(VO)2P2O7 + Na4P2O7 + amorphous products (above 800 °C)—are validated by in situ XRD and thermogravimetric analysis/differential scanning calorimetry. None of the products reported in this work is consistent with single-phase NaVPO4F at any temperature. It is speculated that the assignments of I4/mmm and C2/c NaVPO4F from solid-state synthesis are incorrect, and that these products are instead multiphase mixtures of Le Meins' Na3V2(PO4)2F3, unreacted VPO4, and hexagonal Na3V2(PO4)3. Liquid-electrolyte-based electrochemical ion exchange of LiVPO4F produces a tavorite NaVPO4F structure, which is very different from Le Meins' family of Na3Al2(PO4)2F3 polymorphs.
Improper data replacement and inappropriate selection of the job scheduling policy are important causes of degraded Spark system speed, which directly degrades the performance of Spark parallel computing. In this paper, we analyze the existing caching mechanism of Spark and find that there is still considerable room for optimizing the existing caching policy. Through task structure analysis, the key information of Spark tasks is extracted to obtain the data and memory usage during the task runtime; on this basis, an RDD weight calculation method is proposed, which integrates various factors affecting RDD usage and establishes an RDD weight model. Based on this model, a minimum-weight replacement algorithm based on RDD structure analysis is proposed. The algorithm ensures that the relatively more valuable data can be kept in memory during data replacement. In addition, the default job scheduling algorithm of the Spark framework considers only a single factor, which cannot schedule jobs effectively and causes a waste of cluster resources. In this paper, an adaptive job scheduling policy based on job classification is proposed to solve this problem. The policy can classify job types and schedule resources more effectively for different types of jobs. The experimental results show that the proposed dynamic data replacement algorithm effectively improves Spark's memory utilization, and the proposed job-classification-based adaptive job scheduling algorithm effectively improves system resource utilization and shortens job completion time.
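The minimum-weight replacement idea can be illustrated with a small sketch. This is not the paper's actual weight model; the weight formula below (recomputation cost times reference count, divided by memory footprint) and all field names are hypothetical stand-ins for the "various factors affecting RDD usage" mentioned in the abstract. When the cache is full, the entry with the smallest weight is evicted first, and a new RDD is admitted only if it is more valuable than the current minimum-weight entry:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CachedRDD:
    name: str
    size_mb: float       # memory footprint
    compute_cost: float  # cost to recompute if evicted
    ref_count: int       # pending stages that still reference it

    def weight(self) -> float:
        # Hypothetical weight: retained value per MB of memory held.
        return (self.compute_cost * self.ref_count) / self.size_mb

class WeightedCache:
    def __init__(self, capacity_mb: float):
        self.capacity = capacity_mb
        self.entries: List[CachedRDD] = []

    def used(self) -> float:
        return sum(r.size_mb for r in self.entries)

    def put(self, rdd: CachedRDD) -> None:
        # Evict minimum-weight entries until the new RDD fits.
        while self.used() + rdd.size_mb > self.capacity and self.entries:
            victim = min(self.entries, key=CachedRDD.weight)
            if victim.weight() >= rdd.weight():
                return  # new RDD is less valuable; do not cache it
            self.entries.remove(victim)
        self.entries.append(rdd)
```

For example, with a 100 MB cache, a cheap-to-recompute, rarely referenced RDD is refused admission rather than evicting a more valuable resident, which is the behavior the abstract attributes to weight-based replacement.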