This study examined the effects of extraversion and neuroticism on participants' reported vividness of visual imagery and on their memory performance for concrete and abstract nouns. Groups of extraverts (n = 15) and introverts (n = 15) were selected from a larger original sample and asked to remember a series of concrete and abstract nouns, including a set of lexically ambiguous concrete homonyms (e.g., earth = 1. planet, 2. soil). Extraverts reported more vivid imagery than introverts, but this did not translate into better recall, even for concrete stimuli. Recall was best for unambiguous concrete nouns, followed by concrete homonyms, then abstract nouns. While initial analyses suggested an interaction between extraversion and the type of word presented, later analyses revealed that neuroticism was the main driver of differences in recall between word types. While differences in recall were better explained by context availability theory (Schwanenflugel, 1991) than by dual coding theory (Paivio, 1991), questions remain about the power of either theory to explain the role of individual differences in personality in recall, particularly given that imagery vividness effects were related to extraversion while differences in recall were related to neuroticism. The implications of these findings for future research and theoretical development are discussed.
Objectives: To undertake a quantitative evaluation of a theory-based, interactive online decision aid (BresDex) to support women choosing surgery for early breast cancer (Stage I and II), based on observations of its use in practice.
Methods: Observational cohort study. Website log-files collected data on the use of BresDex. Online questionnaires assessed knowledge about breast cancer and treatment options, degree to which women were deliberating about their options, and surgery intentions, pre- and post-BresDex.
Results: Readiness to make a decision significantly increased after using BresDex (p<.001), although there was no significant improvement in knowledge. Participants who were 'less ready' to make a decision before using BresDex spent longer using it (p<.05). Significant associations between surgery intentions and choices were observed (p<.001), with the majority of participants going on to have breast-conserving surgery (BCS). Greater length of time spent on BresDex was associated with stronger intentions to have BCS (p<.05).
Conclusion: The use of BresDex appears to facilitate readiness to make a decision for surgery, helping to strengthen surgery intentions.
Practice implications: BresDex may prove a useful adjunct to the support provided by the clinical team for women facing surgery for early breast cancer.
Background: Better self-management could improve quality of life (QoL) and reduce hospital admissions in chronic obstructive pulmonary disease (COPD), but the best way to promote it remains unclear. Aim: To explore the feasibility, effectiveness and cost-effectiveness of a novel, layperson-led, theoretically driven COPD self-management support programme. Design and setting: Pilot randomised controlled trial in one UK primary care trust area. Method: Patients with moderate to severe COPD were identified through primary care and randomised 2:1 to the 7-week group intervention or usual care. Outcomes at baseline, 2, and 6 months included self-reported health, St George's Respiratory Questionnaire (SGRQ), EuroQol, and exercise. Results: Forty-four per cent responded to the GP invitation; 116 were randomised: mean (standard deviation [SD]) age 69.5 (9.8) years, 46% male, 78% had unscheduled COPD care in the previous year. Forty per cent of intervention patients completed the course; 35% attended no sessions; and 78% of participants completed the 6-month follow-up questionnaire. Results suggest that the intervention may increase both QoL (mean EQ-5D change 0.12 (95% confidence interval [CI] = –0.02 to 0.26) higher, intervention versus control) and exercise levels, but not SGRQ score. Economic analyses suggested that, with a threshold of £20 000 per quality-adjusted life-year gained, the intervention is likely to be cost-effective. Conclusion: This intervention has good potential to meet the UK National Institute for Health and Clinical Excellence criteria for cost-effectiveness, and further research is warranted. However, to make a substantial impact on COPD self-management, it will also be necessary to explore other ways to enable patients to access self-management education.
Background: Converging neuroimaging research suggests altered emotion neurocircuitry in individuals with posttraumatic stress disorder (PTSD). Emotion activation studies in these individuals have shown hyperactivation in emotion-related regions, including the amygdala and insula, and hypoactivation in emotion-regulation regions, including the medial prefrontal cortex (mPFC) and anterior cingulate cortex (ACC). However, few studies have examined patterns of connectivity at rest in individuals with PTSD, a potentially powerful method for illuminating brain network structure. Methods: Using the amygdala as a seed region, we measured resting-state brain connectivity using 3 T functional magnetic resonance imaging in returning male veterans with PTSD and combat controls without PTSD. Results: Fifteen veterans with PTSD and 14 combat controls enrolled in our study. Compared with controls, veterans with PTSD showed greater positive connectivity between the amygdala and insula, reduced positive connectivity between the amygdala and hippocampus, and reduced anticorrelation between the amygdala and dorsal ACC and rostral ACC. Limitations: Only male veterans with combat exposure were tested; thus, our findings cannot be generalized to women or to individuals with non-combat-related PTSD. Conclusion: These results demonstrate that studies of functional connectivity during resting state can discern aberrant patterns of coupling within emotion circuits and suggest a possible brain basis for emotion-processing and emotion-regulation deficits in individuals with PTSD.
Objective: Convergent neuroimaging and neuropsychological research demonstrates disrupted attention and heightened threat sensitivity in PTSD. This might be linked to aberrations in large-scale networks subserving detection of salient stimuli, i.e. the salience network (SN), and stimulus-independent, internally focused thought, i.e. the default mode network (DMN). Methods: Resting-state brain activity was measured in returning veterans who served in Iraq or Afghanistan with (n=15) and without PTSD (n=15) and in healthy community controls (n=15). Correlation coefficients were calculated between the time course of seed regions in key SN and DMN regions (posterior cingulate, ventromedial prefrontal cortex, and bilateral anterior insula) and all other voxels of the brain. Results: Compared to control groups, PTSD participants showed reduced functional connectivity within the DMN (between DMN seeds and other DMN regions), including rostral ACC/vmPFC (Z=3.31; p=.005, corrected) and hippocampus (Z=2.58; p=.005), and increased connectivity within the SN (between insula seeds and other SN regions), including amygdala (Z=3.03; p=.01, corrected). PTSD participants also demonstrated increased cross-network connectivity. DMN seeds exhibited elevated connectivity with SN regions, including insula (Z=3.06; p=.03, corrected), putamen, and supplementary motor area (Z=4.14; Z=4.08; p<.001), and SN seeds exhibited elevated connectivity with DMN regions, including hippocampus (Z=3.10; p=.048, corrected). Conclusions: During resting-state scanning, PTSD participants showed reduced coupling within the DMN, greater coupling within the SN, and increased coupling between the DMN and SN. Our findings suggest a relative dominance of threat-sensitive circuitry in PTSD, even in task-free conditions. Disequilibrium between large-scale networks subserving salience detection versus internally focused thought may be associated with PTSD pathophysiology.
Between 20% and 40% of young children suffer a feverish illness each year and many of these will present to their general practitioner. Although the majority of these children have benign, self-limiting illness, infection remains the leading cause of death in children under the age of 5 years. Appropriate assessment, management and referral of the febrile child are important skills to acquire for doctors working in primary care. This article outlines the signs and symptoms of serious infective illness in children under 5 years of age and describes current National Institute for Health and Clinical Excellence (NICE) guidelines for feverish illness in children and the use of the ‘traffic-light’ risk score in the context of face-to-face and remote assessment.
Abstract
Objectives
To determine the relative contribution of general practices (GPs) to the diagnosis of chlamydia and gonorrhoea in England and whether treatment complied with national guidelines.
Design
Analysis of longitudinal electronic health records in the Clinical Practice Research Datalink (CPRD) and national sexually transmitted infection (STI) surveillance databases, England, 2000–2011.
Setting
GPs, and community and specialist STI services.
Participants
Patients diagnosed with chlamydia (n=1 386 169) and gonorrhoea (n=232 720) at CPRD GPs, and at community and specialist STI services, 2000–2011.
Main outcome measures
Numbers and rates of chlamydia and gonorrhoea diagnoses; percentages of patients diagnosed by GPs relative to other services; percentage of GP patients treated and antimicrobials used; percentage of GP patients referred.
Results
The diagnosis rate (95% CI) per 100 000 population of chlamydia in general practice increased from 22.8 (22.4–23.2) in 2000 to 29.3 (28.8–29.7) in 2011 (p<0.001), while the proportion treated increased from 59.5% to 78.4% (p=0.001). Over 90% were prescribed a recommended antimicrobial. Over the same period, the diagnosis rate (95% CI) per 100 000 population of gonorrhoea in general practice ranged between 3.2 (3.0–3.3) and 2.4 (2.2–2.5; p=0.607), and the proportion treated ranged between 32.7% and 53.6% (p=0.262). Despite being discontinued as a recommended therapy for gonorrhoea in 2005, ciprofloxacin accounted for 42% of prescriptions in 2007 and 20% in 2011. Over the study period, GPs diagnosed between 9% and 16% of chlamydia cases and between 6% and 9% of gonorrhoea cases in England.
Conclusions
General practice makes an important contribution to the diagnosis and treatment of bacterial STIs in England. While most patients diagnosed with chlamydia were managed appropriately, many of those treated for gonorrhoea received antimicrobials no longer recommended for use. Given the global threat of antimicrobial resistance, GPs should remain abreast of national treatment guidelines and alert to treatment failure in their patients.
BACKGROUND/AIMS: War experiences can affect mental health, but large-scale studies on the long-term impact are rare. We aimed to assess long-term mental health consequences of war in both people who stayed in the conflict area and refugees.
METHOD: On average 8 years after the war in former Yugoslavia, participants were recruited by probabilistic sampling in 5 Balkan countries and by registers and networking in 3 Western European countries. General psychological symptoms were assessed on the Brief Symptom Inventory and posttraumatic stress symptoms on the Impact of Event Scale-Revised.
RESULTS: We assessed 3,313 interviewees in the Balkans and 854 refugees. Paranoid ideation and anxiety were the severest psychological symptoms in both samples. In multivariable regressions, older age, various specific war experiences and more traumatic experiences after the war were all associated with higher levels of both general psychological and posttraumatic stress symptoms in both samples. Additionally, a greater number of migration stressors and having only temporary legal status in the host country were associated with greater severity of symptoms in refugees.
CONCLUSIONS: Psychological symptoms remain high in war-affected populations many years after the war, and this is particularly evident for refugees. Traumatic war experiences still predict higher symptom levels even when the findings have been adjusted for the influence of other factors.
Background Prevalence rates of post-traumatic stress disorder (PTSD) following the experience of war have been shown to be high. However, little is known about the course of the disorder in people who remained in the area of conflict and in refugees. Method We studied a representative sample of 522 adults with war-related PTSD in five Balkan countries and 215 compatriot refugees in three Western European countries. They were assessed on average 8 years after the war and reinterviewed 1 year later. We established change in PTSD symptoms, measured on the Impact of Events Scale – Revised (IES-R), and factors associated with more or less favourable outcomes. Results During the 1-year period, symptoms decreased substantially in both Balkan residents and in refugees. The differences were significant for IES-R total scores and for the three subscales of intrusions, avoidance and hyperarousal. In multivariable regressions adjusting for the level of baseline symptoms, co-morbidity with depression predicted less favourable symptom change in Balkan residents. More pre-war traumatic events and the use of mental health services within the follow-up period were associated with less improvement in refugees. Conclusions Several years after the war, people with PTSD reported significant symptom improvement that might indicate a fluctuating course over time. Co-morbid depression may have to be targeted in the treatment of people who remained in the post-conflict regions whereas the use of mental health services seems to be linked to the persistence of symptoms among refugees.
Purpose: The aim was to assess whether experiences of war trauma remain directly associated with suicidality in war-affected communities when other risk factors are considered.
Materials and methods: In the main sample, 3313 participants from former Yugoslavia who experienced war trauma were recruited using random sampling in five Balkan countries. In the second sample, 854 refugees from former Yugoslavia were recruited through registers and networking in three Western European countries. Sociodemographic characteristics and data on trauma exposure, psychiatric diagnoses and level of suicidality were assessed. Results: In the main sample, 113 participants (3.4%) had high suicidality, which was associated with the number of potentially traumatic war experiences (odds ratio 1.1) and war-related imprisonment (odds ratio 3) once all measured risk factors were considered. These associations were confirmed in the refugee sample, which had a higher suicidality rate (10.2%).
Discussion and conclusions: The number of potentially traumatic war experiences, and in particular imprisonment, may be considered a relevant risk factor for suicidality in people affected by war.
Objective. To critically review the evidence on the efficacy and effectiveness of practitioner-based complementary therapies for patients with osteoarthritis. We excluded t’ai chi and acupuncture, which have been the subject of recent reviews. Methods. Randomized controlled trials, published in English up to May 2011, were identified using systematic searches of bibliographic databases and searching of reference lists. Information was extracted on outcomes, statistical significance in comparison with alternative treatments and reported side effects. The methodological quality of the identified studies was determined using the Jadad scoring system. Outcomes considered were pain and patient global assessment. Results. In all, 16 eligible trials were identified covering 12 therapies. Overall, there was no good evidence of the effectiveness of any of the therapies in relation to pain or global health improvement/quality of life because most therapies had only a single randomized controlled trial. Where positive results were reported, they often came from comparisons of an active intervention with no intervention. Therapies with multiple trials either provided null (biofeedback) or inconsistent results (magnet therapy), or the trials available scored poorly for quality (chiropractic). There were few adverse events reported in the trials. Conclusion. There is not sufficient evidence to recommend any of the practitioner-based complementary therapies considered here for the management of OA, but neither is there sufficient evidence to conclude that they are not effective or efficacious.
Objective. To critically review the evidence on the effectiveness of complementary therapies for patients with RA.
Methods. Randomized controlled trials, published in English up to May 2011, were identified using systematic searches of bibliographic databases and searching of reference lists. Information was extracted on outcomes and statistical significance in comparison with alternative treatments and reported side effects. The methodological quality of the identified studies was determined using the Jadad scoring system. All outcomes were considered but with a focus on patient global assessment and pain reporting.
Results. Eleven eligible trials were identified covering seven therapies. Three trials that compared acupuncture with sham acupuncture reported no significant difference in pain reduction between the groups but one out of two reported an improvement in patient global assessment. Except for reduction in physician's global assessment of treatment and disease activity reported in one trial, no other comparative benefit of acupuncture was seen. There were two studies on meditation and one each on autogenic training, healing therapy, progressive muscle relaxation, static magnets and tai chi. None of these trials reported positive comparative effects on pain but some positive effects on patient global assessment were noted at individual time points in the healing therapy and magnet therapy studies. A small number of other outcomes showed comparative improvement in individual trials. There were no reports of major adverse events.
Conclusion. The very limited evidence available indicates that for none of the practitioner-based complementary therapies considered here is there good evidence of efficacy or effectiveness in the management of RA.
Background: Rheumatoid arthritis is a chronic inflammatory condition that affects the joints, causing unpredictable episodes of pain, stiffness and disability. People with rheumatoid arthritis usually require lifelong specialist follow-up but frequently have periods when their disease can be managed through self-care or care provided by their general practitioner. Compared with traditional clinician-driven care in rheumatoid arthritis, patient-initiated care has proven more beneficial in terms of reducing unnecessary medical reviews, providing greater satisfaction to patients and staff, and maintaining the patient’s physical and psychological status. We aim to evaluate the implementation of a patient-initiated review system in a routine secondary care rheumatology service in a public hospital in England, where patients have the opportunity to self-manage their disease by requesting specialist reviews at times of need instead of attending clinician-scheduled appointments. Methods/design: Three hundred and eighty patients attending routine review at Plymouth Hospitals NHS Trust will be randomised to either enrol immediately into a patient-initiated review system (direct access group), or to be seen regularly by a clinician at the hospital (regular clinician-initiated group). Patients (or their general practitioner) in the direct access group can arrange a review by calling a rheumatology nurse-led advice line that enables telephone-delivered clinical advice, or where appropriate, an appointment with a rheumatologist within 10 working days. Patients in the regular clinician-initiated group will attend their planned appointments at regular intervals during the intervening period of 12 months. The primary outcome of interest is patient satisfaction; secondary outcomes include service use, waiting times and clinical measures.
Semi-structured, in-depth interviews will be conducted with a subset of patients and staff with the aim of identifying facilitators/barriers in implementing patient-initiated clinics. Discussion: The implementation of a patient-initiated review system in routine care rheumatology will replace the fixed clinician-driven review system with a more flexible patient-driven system where patients usually self-manage their disease, but can request prompt help when required. We believe that this study will enable a comparison of the changes in local services and will be helpful in exploring the benefits/drawbacks of such implementation, thus providing lessons for implementation in other hospitals and for other chronic diseases.
This study aimed to determine demographic, behavioural and self-report disease/treatment variables among HIV-infected individuals (n = 666) that predict unprotected intercourse with a partner of unknown/discordant status. Sexual risk behaviour was reported by 12.8%. In multivariable analysis, risk was more likely to be reported by gay men compared to women or heterosexual men, and for those with higher psychological symptom burden. Psychological symptoms should be assessed and managed in the HIV outpatient setting to ensure integrated care that enhances prevention.
INTRODUCTION
HIV-positive patients receiving combination antiretroviral therapy (cART) frequently experience metabolic complications such as dyslipidemia and insulin resistance, as well as lipodystrophy, increasing the risk of cardiovascular disease (CVD) and diabetes mellitus (DM). Rates of DM and other glucose-associated disorders among HIV-positive patients have been reported to range between 2 and 14%, and in an ageing HIV-positive population, the prevalence of DM is expected to continue to increase. This study aims to develop a model to predict the short-term (six-month) risk of DM in HIV-positive populations and to compare it with existing models developed in the general population.
METHODS
All patients recruited to the Data Collection on Adverse events of Anti-HIV Drugs (D:A:D) study with follow-up data, without prior DM, myocardial infarction or other CVD events and with a complete DM risk factor profile were included. Conventional risk factors identified in the general population as well as key HIV-related factors were assessed using Poisson-regression methods. Expected probabilities of DM events were also determined based on the Framingham Offspring Study DM equation. The D:A:D and Framingham equations were then assessed using an internal-external validation process; area under the receiver operating characteristic (AUROC) curve and predicted DM events were determined.
RESULTS
Of 33,308 patients, 16,632 (50%) were included, with 376 cases of new-onset DM during 89,469 person-years (PY). Factors predictive of DM included higher glucose, body mass index (BMI) and triglyceride levels, and older age. Among HIV-related factors, recent CD4 counts of <200 cells/µL and lipodystrophy were predictive of new-onset DM. The mean performance of the D:A:D and Framingham equations yielded AUROCs of 0.894 (95% CI: 0.849, 0.940) and 0.877 (95% CI: 0.823, 0.932), respectively. The Framingham equation over-predicted DM events compared to D:A:D for lower glucose and lower triglycerides, and for BMI levels below 25 kg/m(2).
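The AUROC values reported above can be read as a rank statistic: the probability that a randomly chosen patient who developed DM was assigned a higher predicted risk than one who did not. A minimal sketch of that computation, using hypothetical risks and outcomes rather than the study's data:

```python
def auroc(scores, labels):
    """AUROC as a Mann-Whitney rank statistic: the probability that a
    randomly chosen case was assigned a higher score than a non-case."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum(p > n for p in pos for n in neg)   # case outranks non-case
    ties = sum(p == n for p in pos for n in neg)  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# hypothetical predicted 6-month DM risks and observed outcomes (1 = DM)
risk  = [0.90, 0.80, 0.70, 0.40, 0.35, 0.30, 0.20, 0.10]
event = [1, 1, 0, 1, 0, 0, 0, 0]
print(round(auroc(risk, event), 3))  # → 0.933
```

The all-pairs comparison is O(n²) and fine for illustration; production code would use a sorted-rank formulation or a library routine.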
CONCLUSIONS
The D:A:D equation performed well in predicting the short-term onset of DM in the validation dataset and, for specific subgroups, provided better estimates of DM risk than the Framingham equation.
BACKGROUND
The Study of Aldesleukin with and without Antiretroviral Therapy (STALWART) was designed to evaluate whether intermittent IL-2, alone or with peri-cycle ART, increased CD4+ cell counts (and so delayed initiation of ART) in HIV-infected individuals with ≥300 CD4+ cells/mm³, compared with untreated controls. When the results of two large clinical trials, ESPRIT and SILCAAT, showed no clinical benefit from IL-2 therapy, IL-2 administration was halted in STALWART. Because IL-2 recipients in STALWART experienced a greater number of opportunistic disease (OD) events or deaths and more adverse events (AEs), participants were asked to consent to an extended follow-up phase in order to assess the persistence of IL-2 effects.
METHODOLOGY
Participants in this study were followed for clinical events and AEs every 4 months for 24 months. Unadjusted Cox proportional hazards models were used to summarize death, death or first OD event, and first grade 3 or 4 AE.
PRINCIPAL FINDINGS
A total of 267 persons were enrolled in STALWART (176 randomized to the IL-2 arms and 91 to the no therapy arm); 142 individuals in the IL-2 group and 80 controls agreed to enter the extended follow-up study. Initiation of continuous ART was delayed in the IL-2 groups, but once started, resulted in similar CD4+ cell and viral load responses compared to controls. The hazard ratios (95% CI) for IL-2 versus control during the extension phase for death or OD, grade 3 or 4 AE, and grade 4 AE were 1.45 (0.38, 5.45), 0.43 (0.24, 1.63) and 0.20 (0.04, 1.03), respectively. The hazard ratios for the AE outcomes were significantly lower during the extension than during the main study.
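For rare events observed over person-time, an unadjusted hazard ratio of the kind reported here is numerically close to a simple rate ratio with a log-normal confidence interval. A sketch with hypothetical counts (not the STALWART data), standing in for, not reproducing, the Cox models the study actually used:

```python
import math

def rate_ratio_ci(d1, py1, d0, py0, z=1.96):
    """Rate ratio between two groups with a log-normal 95% CI;
    a crude stand-in for an unadjusted hazard ratio when events are rare."""
    rr = (d1 / py1) / (d0 / py0)
    se = math.sqrt(1 / d1 + 1 / d0)  # SE of log rate ratio
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# hypothetical: 6 events over 280 PY (IL-2) vs 2 events over 160 PY (control)
rr, lo, hi = rate_ratio_ci(6, 280, 2, 160)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # → RR 1.71 (95% CI 0.35-8.49)
```

With so few events the interval is very wide, which mirrors the imprecise hazard ratios (e.g. 1.45 with CI 0.38 to 5.45) in the extension-phase results.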
CONCLUSIONS
Adverse events associated with IL-2 cycling did not persist upon discontinuation of IL-2. The use of IL-2 did not impact the subsequent response to initiation of cART.
INTRODUCTION
It is unknown whether HIV treatment guidelines, based on resource-rich country cohorts, are applicable to African populations.
METHODS
We estimated CD4 cell loss in ART-naïve, AIDS-free individuals using mixed models allowing for random intercept and slope, and time from seroconversion to clinical AIDS, death and antiretroviral therapy (ART) initiation by survival methods. Using CASCADE data from 20 European and 3 sub-Saharan African (SSA) cohorts of heterosexually infected individuals aged ≥15 years and infected in or after 2000, we compared estimates between non-African Europeans, Africans in Europe, and Africans in SSA.
RESULTS
Of 1,959 individuals (913 non-African Europeans, 302 Europeans of African origin, 744 in SSA), two-thirds were female; median age at seroconversion was 31 years. Individuals in SSA progressed faster to clinical AIDS but not to death or non-TB AIDS. They also initiated ART later than Europeans and at lower CD4 cell counts. In adjusted models, Africans (especially those in Europe) had lower CD4 counts at seroconversion and slower CD4 decline than non-African Europeans. Median (95% CI) CD4 count at seroconversion for a 15-29 year old woman was 607 (588-627) (non-African European), 469 (442-497) (European of African origin) and 570 (551-589) (SSA) cells/µL, with respective CD4 declines during the first 4 years of 259 (228-289), 155 (110-200), and 199 (174-224) cells/µL (p<0.01).
DISCUSSION
Despite differences in CD4 cell count evolution, death and non-TB AIDS rates were similar across study groups. It is therefore prudent to apply current ART guidelines from resource-rich countries to African populations.
BACKGROUND
Hepatitis B virus (HBV) infection is an increasingly important cause of morbidity and mortality in HIV-infected adults. This study aimed to determine the prevalence and incidence of HBV in the UK CHIC Study, a multicentre observational cohort.
METHODS AND FINDINGS
12 HIV treatment centres were included. Of 37,331 patients, 27,450 had at least one test result (HBsAg, anti-HBs or anti-HBc) available post-1996. 16,043 were white, 8,130 black and 3,277 of other ethnicity. Route of exposure was sex between men (15,223 males), heterosexual sex (3,258 males and 5,384 females), injecting drug use (862) and other (2,723). The main outcome measures were the cumulative prevalence and the incidence of HBV coinfection. HBV-susceptible patients were followed up until HBsAg and/or anti-HBc seroconversion (incident infection), evidence of vaccination, or last visit. Poisson regression was used to determine associated factors. 25,973 had at least one HBsAg test result. Participants with HBsAg results were typically MSM (57%) and white (59%), similar to the cohort as a whole. The cumulative prevalence of detectable HBsAg was 6.9% (6.6 to 7.2%). Among the 3,379 initially HBV-susceptible patients, the incidence of HBV infection was 1.7 (1.5 to 1.9)/100 person-years. Factors associated with incident infection were older age and IDU. The main limitation of the study was that 30% of participants did not have any HBsAg results available; however, baseline characteristics of those with results did not differ from those of the whole cohort. Efforts are ongoing to improve data collection.
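The cumulative prevalence interval quoted above is consistent with a simple Wald (normal-approximation) binomial confidence interval. A sketch using a positive count consistent with the reported 6.9% of 25,973 tested; the exact count is not given in the abstract, so the 1,792 below is an assumption for illustration:

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Crude prevalence with a Wald (normal-approximation) 95% CI."""
    p = cases / n
    half = z * math.sqrt(p * (1 - p) / n)  # SE of a binomial proportion
    return p, p - half, p + half

# 1,792 HBsAg-positive is consistent with 6.9% of the 25,973 tested
p, lo, hi = prevalence_ci(1792, 25_973)
print(f"{100*p:.1f}% ({100*lo:.1f}-{100*hi:.1f}%)")  # → 6.9% (6.6-7.2%)
```

With n this large the Wald interval is adequate; for small samples or extreme proportions a Wilson or exact interval would be preferred.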
CONCLUSIONS
The prevalence of HBV in UK CHIC is in line with estimates from other studies and low by international standards. Incident infection continued to occur even after entry to the cohort, emphasising the need to ensure early vaccination.
INTRODUCTION
Effective treatment of HIV-associated distal sensory polyneuropathy remains a significant unmet therapeutic need.
METHODS
In this randomized, double-blind, controlled study, patients with pain due to HIV-associated distal sensory polyneuropathy received a single 30-minute or 60-minute application of NGX-4010, a capsaicin 8% patch (n = 332), or a low-dose capsaicin (0.04%) control patch (n = 162). The primary endpoint was the mean percent change from baseline in Numeric Pain Rating Scale score to weeks 2-12. Secondary endpoints included patient global impression of change at week 12.
RESULTS
Pain reduction was not significantly different between the total NGX-4010 group (-29.5%) and the total control group (-24.5%; P = 0.097). Greater pain reduction in the 60-minute control group (-30.0%) versus the 30-minute control group (-19.1%) prevented the intended pooling of the control groups to test the individual NGX-4010 treatment groups. No significant pain reduction was observed for the 30-minute NGX-4010 group compared with the 30-minute control group (-26.2% vs. -19.1%, respectively; P = 0.103). Pain reductions in the 60-minute NGX-4010 and control groups were comparable (-32.8% vs. -30.0%, respectively; P = 0.488). Post hoc nonparametric testing demonstrated significant differences favoring the total (P = 0.044) and 30-minute NGX-4010 groups (P = 0.035). Significantly more patients in the total and 30-minute NGX-4010 groups felt improved on the patient global impression of change versus control (67% vs. 55%, P = 0.011 and 65% vs. 45%, P = 0.006, respectively). Mild to moderate transient application-site pain and erythema were the most common adverse events.
CONCLUSIONS
Although the primary endpoint analyses were not significant, trends toward pain improvement were observed after a single 30-minute NGX-4010 treatment.
BACKGROUND
Despite the known substantial benefits of combination antiretroviral therapy (cART), cumulative adverse effects could still limit the overall long-term treatment benefit. Therefore we investigated changes in the rate of death with increasing exposure to cART.
METHODS
A total of 12,069 patients were followed from baseline, defined as the time of starting cART or enrolment into EuroSIDA, whichever occurred later, until death or 6 months after the last follow-up visit. Incidence rates of death were calculated per 1000 person-years of follow-up (PYFU) and stratified by time of exposure to cART (≥3 antiretrovirals): less than 2, 2-3.99, 4-5.99, 6-7.99 and more than 8 years. Duration of cART exposure was the cumulative time actually receiving cART. Poisson regression models were fitted for each cause of death separately.
RESULTS
A total of 1297 patients died during 70,613 PYFU [incidence rate 18.3 per 1000 PYFU, 95% confidence interval (CI) 17.4-19.4], 413 due to AIDS (5.85, 95% CI 5.28-6.41) and 884 due to non-AIDS-related cause (12.5, 95% CI 11.7-13.3). After adjustment for confounding variables, including baseline CD4 cell count and HIV RNA, there was a significant decrease in the rate of all-cause and AIDS-related death between 2 and 3.99 years and longer exposure time. In the first 2 years on cART the risk of non-AIDS death was significantly lower, but no significant difference in the rate of non-AIDS-related deaths between 2 and 3.99 years and longer exposure to cART was observed.
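The crude incidence rates quoted above follow from simple person-time arithmetic, and a log-normal approximation for the confidence interval reproduces the reported all-cause bounds of 17.4 to 19.4. A sketch (the point estimate rounds to 18.4 here; the published 18.3 presumably reflects unrounded inputs, and the study's actual interval method is not stated):

```python
import math

def incidence_rate_ci(events, pyfu, per=1000, z=1.96):
    """Incidence rate per `per` person-years with a log-normal 95% CI."""
    rate = events / pyfu * per
    half = z / math.sqrt(events)  # SE of log(rate) ≈ 1/sqrt(events)
    return rate, rate * math.exp(-half), rate * math.exp(half)

# all-cause deaths reported in the abstract: 1297 over 70,613 PYFU
rate, lo, hi = incidence_rate_ci(1297, 70_613)
print(f"{rate:.1f} ({lo:.1f}-{hi:.1f}) per 1000 PYFU")  # → 18.4 (17.4-19.4) per 1000 PYFU
```

The same function applied to the AIDS-related (413 deaths) and non-AIDS (884 deaths) counts gives intervals in line with those reported.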
CONCLUSION
We found no evidence of an increased risk of either all-cause or non-AIDS-related death with long-term cumulative cART exposure.
INTRODUCTION
Although patients with HIV now have near-normal life expectancies as a result of antiretroviral treatment, long-term adverse effects are of growing concern. Using time-updated laboratory measurements, we applied several methods to derive a score that can be used to identify individuals at high risk of mortality.
METHODS
Patients who started highly active antiretroviral therapy after 2000 and had ≥1 CD4 count, viral load, and laboratory marker recorded after the date of starting highly active antiretroviral therapy were included in the analyses. Laboratory markers were stratified into quintiles, and the association between each marker and mortality was assessed using Poisson regression. The estimates of the final model were used to construct a score for predicting short-term mortality. Several methods, including multiple imputation, were used to analyze records with missing measurements.
RESULTS
Of the 7232 patients included in this analysis, 247 died over 24,796 person-years of follow-up, giving an overall mortality rate of 1.00 (95% confidence interval: 0.87 to 1.12) per 100 person-years. Regardless of which method was used to deal with missing data, albumin, alkaline phosphatase, and hemoglobin were independently associated with mortality. Alanine transaminase was independently associated with mortality when patients with missing measurements were assumed to have measurements within the normal range. The C-statistics for all models ranged from 0.76 to 0.78.
CONCLUSION
Measurements of alanine transaminase, albumin, alkaline phosphatase, and hemoglobin within the normal range were predictive of mortality; we therefore suggest a scoring system for predicting mortality that relies on the raw values of these 4 laboratory markers.
INTRODUCTION
The CD4 count and CD4 percentage (CD4%) are both strong predictors of clinical disease progression in human immunodeficiency virus (HIV). Although individuals may show discordancy between their CD4 count and CD4%, the clinical relevance of this is unclear.
METHODS
Discordancy was defined according to whether the CD4% was ≤10th percentile for a selected CD4 count range (low discordancy), within the central 80% range (concordant), or ≥90th percentile (high discordancy). Regression methods identified factors associated with low and high discordancy in untreated individuals and assessed the impact of discordancy on treatment responses to highly active antiretroviral therapy (HAART).
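The percentile-based definition above amounts to a simple three-way classifier within each CD4-count stratum. A hypothetical sketch (the study's exact CD4-count banding and percentile-estimation method are not given in the abstract):

```python
import numpy as np

def classify_discordancy(cd4_percent: np.ndarray) -> np.ndarray:
    """Label CD4% values within one CD4-count stratum as 'low', 'concordant'
    or 'high' discordancy, using the stratum's own 10th and 90th percentiles
    as per the definition in the Methods. Illustrative only."""
    p10, p90 = np.percentile(cd4_percent, [10, 90])
    labels = np.full(cd4_percent.shape, "concordant", dtype=object)
    labels[cd4_percent <= p10] = "low"    # CD4% at or below 10th percentile
    labels[cd4_percent >= p90] = "high"   # CD4% at or above 90th percentile
    return labels
```

By construction, roughly 10% of a stratum falls into each discordant tail and about 80% is labelled concordant.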
RESULTS
High discordancy was associated with female sex, low viral load, and white ethnicity; low discordancy was associated with black or nonwhite ethnicity, older age, and injection drug use. Clinical event rates were higher in individuals with high discordancy starting HAART, but there was no association with subsequent HIV progression by 6 months after starting HAART. CD4 count increases remained lower, by 20 cells/mm³, in individuals with low discordancy, and higher, by 27 cells/mm³, in those with high discordancy.
CONCLUSIONS
Overall, discrepancies between the CD4 count and CD4% are small, supporting the use of absolute CD4 counts as a monitoring tool.
OBJECTIVES
We designed two studies to evaluate two different combination antiretroviral therapy (cART) stopping strategies, namely a 'staggered stop' approach (STOP 1 study) and a 'protected stop' approach (STOP 2 study), in order to find the best 'universal stop' strategy.
PATIENTS AND METHODS
Patients who stopped cART for any reason were recruited. In STOP 1, 10 patients on efavirenz continued dual nucleos(t)ide reverse transcriptase inhibitors (NRTIs) for 1 week after discontinuing efavirenz. Efavirenz concentrations were measured weekly for up to 3 weeks. In STOP 2, 20 patients stopped their cART and replaced it with two tablets of lopinavir/ritonavir (Kaletra) (100/50 mg) twice daily for 4 weeks. Lopinavir, efavirenz, nevirapine and tenofovir concentrations were measured weekly for up to 4 weeks. Virological and resistance testing were performed.
RESULTS
In STOP 1, five patients still had efavirenz present (median t1/2 = 148.4 h) 3 weeks after stopping. In STOP 2, 15/20 patients had a viral load (VL) of <40 copies/mL and 3/20 patients had a reduction in VL by 4 weeks. Six patients opted not to stop lopinavir/ritonavir and still had <40 copies/mL at week 8. Week 1-4 median trough lopinavir concentrations were well above the EC95. Six patients still had detectable concentrations of their original cART persisting for >1 week after stopping. No patients developed new resistance mutations.
CONCLUSIONS
Plasma efavirenz concentrations can persist for up to 3 weeks after patients stop efavirenz-containing regimens. This suggests that stopping efavirenz only 1 week before the NRTIs may not be long enough for some individuals. The use of lopinavir/ritonavir monotherapy for a 4-week period may be an alternative, pharmacologically and virologically effective universal stopping strategy that warrants further investigation.
OBJECTIVES
Current British HIV Association (BHIVA) guidelines recommend that all patients with a CD4 count <350 cells/μL are offered highly active antiretroviral therapy (HAART). We identified risk factors for delayed initiation of HAART following a CD4 count <350 cells/μL.
METHODS
All adults under follow-up in 2008 who had a first confirmed CD4 count <350 cells/μL from 2004 to 2008, who had not initiated treatment and who had >6 months of follow-up were included in the study. Characteristics at the time of the low CD4 cell count and over follow-up were compared to identify factors associated with delayed HAART uptake. Analyses used proportional hazards regression with fixed (sex/risk group, age, ethnicity, AIDS, baseline CD4 cell count and calendar year) and time-updated (frequency of CD4 cell count measurement, proportion of CD4 counts <350 cells/μL, latest CD4 cell count, CD4 percentage and viral load) covariates.
RESULTS
Of 4871 patients with a confirmed low CD4 cell count, 436 (8.9%) remained untreated. In multivariable analyses, those starting HAART were older [adjusted relative hazard (aRH)/10 years 1.15], were more likely to be female heterosexuals (aRH 1.13), were more likely to have had AIDS (aRH 1.14), had a greater number of CD4 measurements <350 cells/μL (aRH/additional count 1.18), had a lower CD4 count over follow-up (aRH/50 cells/μL higher 0.57), had a lower CD4 percentage (aRH/5% higher 0.90) and had a higher viral load (aRH/log10 HIV-1 RNA copies/mL higher 1.06). Injecting drug users (aRH 0.53), women infected with HIV via nonsexual or injecting drug use routes (aRH 0.75) and those of unknown ethnicity (aRH 0.69) were less likely to commence HAART.
CONCLUSION
A substantial minority of patients with a CD4 count <350 cells/μL remain untreated despite this indication for treatment.
Although initiatives are under way in the UK to diagnose HIV infection early, late presentation remains a major issue: it often results in serious health complications for the individual and has implications for society, including high costs and increased rates of transmission. Intervention strategies in the UK have aimed at increasing testing opportunities, but a significant proportion of those with HIV infection still either decline testing or continue to test late. The main objective of this study was to identify ideas and themes as to why testing was not carried out earlier in men who have sex with men (MSM) who presented with late HIV infection. Semi-structured interviews were carried out with MSM presenting late, with a CD4 cell count of <200 cells/μL. A structured framework approach was used to analyse the data collected and to generate ideas as to why participants did not seek testing earlier. Seventeen MSM were interviewed and four main themes were identified: psychological barriers, including fear of illness and dying; stigma surrounding testing for HIV and living with a positive diagnosis; perceived low risk of contracting HIV, despite participants reporting a good understanding of HIV and its transmission; and strong views that a more active approach by healthcare services, including general practice, is necessary if the uptake of HIV testing is to increase. Late presentation with HIV infection continues to be a problem in the UK despite government initiatives to expand opportunities for testing. Recurring themes for late testing were a low perceived risk of HIV infection and a fear of HIV and of a positive diagnosis. Population-targeted health promotion, alongside a more proactive approach by healthcare professionals and making HIV testing more convenient and accessible, may result in earlier testing.
BACKGROUND
We examined differences by geographical origin (GO) in the time from HIV seroconversion (SC) to AIDS, death, and initiation of combination antiretroviral therapy (cART).
METHODS
Data from HIV seroconverter cohorts in Europe, Australia and Canada (CASCADE) were used; GO was classified as: western countries (WE), North Africa and Middle East (NAME), sub-Saharan Africa (SSA), Latin America (LA), and Asia (ASIA). Differences by GO were assessed using Cox models. The administrative censoring date was 30 June 2008.
RESULTS
Of 16 941 seroconverters, 15 548 were from WE, 158 from NAME, 762 from SSA, 349 from LA, and 124 from ASIA. We found no differences by GO in the risks of AIDS (P = .99) and death (P = .12), although seroconverters from NAME (adjusted hazard ratio [aHR]: 0.57; 95% CI: 0.33-0.94) and SSA (aHR: 0.74; 95% CI: 0.50-1.10) appeared to have lower mortality than those from WE. The likelihood of initiating cART differed by GO (P < .001): seroconverters from SSA were more likely to initiate cART than those from WE (aHR: 1.48; 95% CI: 1.26-1.74), but not after adjustment for CD4 cell count at SC (aHR: 1.11; 95% CI: 0.88-1.40).
CONCLUSIONS
In settings with universal access to healthcare, GO does not play a major role in HIV disease progression.
OBJECTIVE
We recently showed that a urine albumin/total protein ratio (uAPR) <0.4 identifies tubular pathology in proteinuric patients. In tubular disorders, proteinuria is usually of low molecular weight and contains relatively little albumin. We tested the hypothesis that uAPR is useful in identifying tubular pathology related to antiretroviral use in HIV-infected patients.
METHODS
We retrospectively identified urine protein/creatinine ratios (uPCRs) in HIV-infected patients. A subset of samples had uPCR and urine albumin/creatinine ratio (uACR) measured simultaneously. We classified proteinuric patients (uPCR >30 mg/mmol) into two groups: those with predominantly 'tubular' proteinuria (TP) (uAPR <0.4) and those with predominantly 'glomerular' proteinuria (GP) (uAPR ≥0.4).
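The classification above reduces to two thresholds, since uAPR is the ratio of uACR to uPCR (both indexed to creatinine). A minimal sketch; the function name and return labels are illustrative:

```python
def classify_proteinuria(upcr: float, uacr: float) -> str:
    """Apply the abstract's rules: a sample is proteinuric if uPCR >30
    mg/mmol; proteinuric samples with uAPR (= uACR/uPCR) <0.4 are labelled
    predominantly 'tubular', and those with uAPR >=0.4 'glomerular'.
    Sketch only, assuming both ratios are in mg/mmol."""
    if upcr <= 30:
        return "not proteinuric"
    uapr = uacr / upcr
    return "tubular" if uapr < 0.4 else "glomerular"
```

For example, a sample with uPCR 100 mg/mmol and uACR 20 mg/mmol (uAPR 0.2) would be labelled tubular, consistent with the relatively albumin-poor proteinuria of tubular disorders.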
RESULTS
A total of 618 of 5244 samples from 1378 patients had uPCR ≥ 30 mg/mmol. uAPRs were available in 144 patients: 46 patients (32%) had TP and 21 (15%) GP; the remainder had uPCR <30 mg/mmol. The TP group had a higher fractional excretion of phosphate compared with the GP group (mean 27% vs. 16%, respectively; P<0.01). Patients with TP were more likely to be on tenofovir and/or a boosted protease inhibitor compared with those with GP. In 18 patients with heavy proteinuria (uPCR >100 mg/mmol), a renal assessment was made; eight had a kidney biopsy. In all cases, the uAPR results correlated with the nephrological diagnosis.
CONCLUSIONS
In HIV-infected patients, measuring uAPR may help to identify patients in whom a renal biopsy is indicated, and those in whom tubular dysfunction, possibly related to antiretroviral toxicity, is an important cause of proteinuria. We suggest that uAPR would be useful as a routine screening measure in patients with proteinuria.
BACKGROUND
Most adults infected with HIV achieve viral suppression within a year of starting combination antiretroviral therapy (cART). It is important to understand the risk of AIDS events or death for patients with a suppressed viral load.
METHODS AND FINDINGS
Using data from the Collaboration of Observational HIV Epidemiological Research Europe (2010 merger), we assessed the risk of a new AIDS-defining event or death in successfully treated patients. We accumulated episodes of viral suppression for each patient while on cART, each episode beginning with the second of two consecutive plasma viral load measurements <50 copies/mL and ending with either a measurement >500 copies/mL, the first of two consecutive measurements between 50 and 500 copies/mL, cART interruption or administrative censoring. We used stratified multivariate Cox models to estimate the association between time-updated CD4 cell count and a new AIDS event or death, or death alone. 75,336 patients contributed 104,265 suppression episodes and were suppressed while on cART for a median of 2.7 years. The mortality rate was 4.8 per 1,000 years of viral suppression. A higher CD4 cell count was always associated with a reduced risk of a new AIDS event or death, with a hazard ratio per 100 cells/μL (95% CI) of 0.35 (0.30-0.40) for counts <200 cells/μL, 0.81 (0.71-0.92) for counts 200 to <350 cells/μL, 0.74 (0.66-0.83) for counts 350 to <500 cells/μL, and 0.96 (0.92-0.99) for counts ≥500 cells/μL. A higher CD4 cell count became even more beneficial over time for patients with CD4 cell counts <200 cells/μL.
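The episode-construction rules described above amount to a small state machine over each patient's ordered viral-load series. A simplified sketch, in which cART interruptions are ignored and list indices stand in for measurement dates:

```python
def suppression_episodes(vl: list[float]) -> list[tuple[int, int]]:
    """Build viral-suppression episodes from an ordered viral-load series
    (copies/mL). An episode starts at the second of two consecutive
    measurements <50; it ends at a measurement >500, at the first of two
    consecutive measurements in 50-500, or at the end of the series
    (administrative censoring). Returns (start, end) index pairs.
    Sketch only: cART interruptions are not modelled."""
    episodes: list[tuple[int, int]] = []
    start = None
    n = len(vl)
    for i in range(1, n):
        if start is None:
            if vl[i] < 50 and vl[i - 1] < 50:
                start = i  # second of two consecutive suppressed values
        else:
            if vl[i] > 500:
                episodes.append((start, i))  # single high measurement ends it
                start = None
            elif 50 <= vl[i] <= 500 and i + 1 < n and 50 <= vl[i + 1] <= 500:
                episodes.append((start, i))  # first of two consecutive 50-500
                start = None
    if start is not None:
        episodes.append((start, n - 1))  # censored while still suppressed
    return episodes
```

Note that a single isolated measurement in the 50-500 range (a "blip") does not terminate an episode under these rules.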
CONCLUSIONS
Despite the low mortality rate, the risk of a new AIDS event or death follows a CD4 cell count gradient in patients with viral suppression. A higher CD4 cell count was associated with the greatest benefit for patients with a CD4 cell count <200 cells/μL, but still conferred some benefit for those with a CD4 cell count ≥500 cells/μL.
Patient self-reported outcomes are increasingly important in measuring disease, treatment and care outcomes. It is unclear what constitutes well-being, using a combined biomedical and psychosocial approach, for patients with access to antiretroviral therapy (ART). This study aimed to determine the variance within the visual analogue scale (VAS) measure of health status using the existing five dimensions of the EuroQOL-5D, to identify which domains have the greatest effect on self-reported health status, and to identify associations with the VAS using both biomedical and psychosocial factors among HIV outpatients. Consecutive patients in five UK clinics were recruited to a cross-sectional survey (n = 778; 86% response rate). Patients self-completed validated measures, with treatment variables extracted from file. On the EuroQOL-5D, nearly one-third (28.1%) had mobility problems, one-fifth (18.7%) self-care problems, one-third (37.4%) difficulty in performing usual tasks and one-half (44.4%) reported pain/discomfort. In the regression model to determine associations with self-reported health status (VAS score), neither CD4 count nor ART status was associated with the outcome. However, in addition to four dimensions of the EuroQOL-5D, poorer health status was associated with worse physical symptom burden, treatment optimism and psychological symptoms. There is a relatively high prevalence of psychological morbidity and poor physical function, and these burdens of disease are associated with worse self-reported health status. As HIV management focuses on treatment for extended survival and a chronic model of disease, clinical attention to the physical and psychological dimensions of patient care is essential to achieve optimal well-being.
OBJECTIVES
The efficacy and hepatic safety of the non-nucleoside reverse transcriptase inhibitors rilpivirine (TMC278) and efavirenz were compared in treatment-naive, HIV-infected adults with concurrent hepatitis B virus (HBV) and/or hepatitis C virus (HCV) infection in the pooled week 48 analysis of the Phase III, double-blind, randomized ECHO (NCT00540449) and THRIVE (NCT00543725) trials.
METHODS
Patients received 25 mg of rilpivirine once daily or 600 mg of efavirenz once daily, plus two nucleoside/nucleotide reverse transcriptase inhibitors. At screening, patients had alanine aminotransferase/aspartate aminotransferase levels ≤5× the upper limit of normal. HBV and HCV status was determined at baseline by HBV surface antigen, HCV antibody and HCV RNA testing.
RESULTS
HBV/HCV coinfection status was known for 670 patients in the rilpivirine group and 665 in the efavirenz group. At baseline, 49 rilpivirine and 63 efavirenz patients [112/1335 (8.4%)] were coinfected with either HBV [55/1357 (4.1%)] or HCV [57/1333 (4.3%)]. The safety analysis included all available data, including beyond week 48. Eight patients seroconverted during the study (rilpivirine: five; efavirenz: three). A higher proportion of patients achieved viral load <50 copies/mL (intent to treat, time to loss of virological response) in the subgroup without HBV/HCV coinfection (rilpivirine: 85.0%; efavirenz: 82.6%) than in the coinfected subgroup (rilpivirine: 73.5%; efavirenz: 79.4%) (rilpivirine, P = 0.04 and efavirenz, P = 0.49, Fisher's exact test). The incidence of hepatic adverse events (AEs) was low in both groups in the overall population (rilpivirine: 5.5% versus efavirenz: 6.6%) and was higher in HBV/HCV-coinfected patients than in those not coinfected (26.7% versus 4.1%, respectively).
CONCLUSIONS
Hepatic AEs were more common and response rates lower in HBV/HCV-coinfected patients treated with rilpivirine or efavirenz than in those who were not coinfected.
Gender is important in the experience of illness generally and of HIV specifically. In this study the authors compare 183 HIV-positive women with 76 HIV-positive heterosexual men attending United Kingdom HIV clinics on clinical, treatment, and mental health factors. Participants completed a questionnaire on mental health and HIV-related factors. Laboratory measures of HIV viral load and CD4 cell count were obtained at baseline and 6-18 months later. After adjusting for age, employment, and treatment status, men were significantly less likely than women to suffer from high psychological distress [adjusted odds ratio (OR) = 0.38, 95% confidence interval (CI): 0.17, 0.86] and high global symptom distress (adjusted OR = 0.42, 95% CI: 0.19, 0.92). However, men were more likely than women to report having suicidal thoughts (adjusted OR = 1.85, 95% CI: 0.95, 3.58). Relational, sexual behavior, and quality of life factors were similar for men and women. Adherence levels did not differ by gender but were sub-optimal in 56% of patients. Men had significantly lower CD4 counts than women at baseline, but not at follow-up. No differences were observed in the proportions with viral suppression. The groups had generally similar HIV experiences, with high psychological distress in both. Adherence monitoring and gender-appropriate psychological support are needed for these groups.
BACKGROUND
The lower tuberculosis incidence reported in human immunodeficiency virus (HIV)-positive individuals receiving combined antiretroviral therapy (cART) is difficult to interpret causally. Furthermore, the role of unmasking immune reconstitution inflammatory syndrome (IRIS) is unclear. We aim to estimate the effect of cART on tuberculosis incidence in HIV-positive individuals in high-income countries.
METHODS
The HIV-CAUSAL Collaboration consisted of 12 cohorts from the United States and Europe of HIV-positive, ART-naive, AIDS-free individuals aged ≥18 years with baseline CD4 cell count and HIV RNA levels followed up from 1996 through 2007. We estimated hazard ratios (HRs) for cART versus no cART, adjusted for time-varying CD4 cell count and HIV RNA level via inverse probability weighting.
RESULTS
Of 65 121 individuals, 712 developed tuberculosis over 28 months of median follow-up (incidence, 3.0 cases per 1000 person-years). The HR for tuberculosis for cART versus no cART was 0.56 (95% confidence interval [CI], 0.44-0.72) overall, 1.04 (95% CI, 0.64-1.68) for individuals aged >50 years, and 1.46 (95% CI, 0.70-3.04) for people with a CD4 cell count of <50 cells/μL. Compared with people who had not started cART, HRs differed by time since cART initiation: 1.36 (95% CI, 0.98-1.89) for initiation <3 months ago and 0.44 (95% CI, 0.34-0.58) for initiation ≥3 months ago. Compared with people who had not initiated cART, HRs <3 months after cART initiation were 0.67 (95% CI, 0.38-1.18), 1.51 (95% CI, 0.98-2.31), and 3.20 (95% CI, 1.34-7.60) for people <35, 35-50, and >50 years old, respectively, and 2.30 (95% CI, 1.03-5.14) for people with a CD4 cell count of <50 cells/μL.
CONCLUSIONS
Tuberculosis incidence decreased after cART initiation, but not among people >50 years old or with CD4 cell counts of <50 cells/μL. Despite the overall decrease in tuberculosis incidence, the increased rate during the first 3 months of cART suggests unmasking IRIS.
BACKGROUND
Chronic kidney disease (CKD) is associated with increased all-cause mortality and kidney disease progression. Decreased kidney function at baseline may identify human immunodeficiency virus (HIV)-positive patients at increased risk of death and kidney disease progression.
STUDY DESIGN
Observational cohort study.
SETTING & PARTICIPANTS
7 large HIV cohorts in the United Kingdom with kidney function data available for 20,132 patients.
PREDICTOR
Baseline estimated glomerular filtration rate (eGFR).
OUTCOMES
Death and progression to stages 4-5 CKD (eGFR <30 mL/min/1.73 m² for >3 months) in Cox proportional hazards and competing-risk regression models.
RESULTS
Median age at baseline was 34 (25th-75th percentile, 30-40) years, median CD4 cell count was 350 (25th-75th percentile, 208-520) cells/μL, and median eGFR was 100 (25th-75th percentile, 87-112) mL/min/1.73 m². Patients were followed up for a median of 5.3 (25th-75th percentile, 2.0-8.9) years, during which 1,820 died and 56 progressed to stages 4-5 CKD. A U-shaped relationship between baseline eGFR and mortality was observed. After adjustment for potential confounders, eGFRs <45 and >105 mL/min/1.73 m² remained significantly associated with increased risk of death. Baseline eGFR <90 mL/min/1.73 m² was associated with increased risk of kidney disease progression, with the highest incidence rates of stages 4-5 CKD (>3 events/100 person-years) observed in black patients with eGFR of 30-59 mL/min/1.73 m² and those of white/other ethnicity with eGFR of 30-44 mL/min/1.73 m².
LIMITATIONS
The relatively small number of patients with decreased eGFR at baseline, the low rates of progression to stages 4-5 CKD, and the lack of data for diabetes, hypertension, and proteinuria.
CONCLUSIONS
Although stages 4-5 CKD were uncommon in this cohort, baseline eGFR allowed the identification of patients at increased risk of death and at greatest risk of kidney disease progression.
BACKGROUND
Extra-skeletal calcification and disordered phosphate metabolism are hallmarks of chronic kidney disease-mineral bone disorder (CKD-MBD). Osteoprotegerin (OPG) and fibroblast growth factor 23 (FGF-23) are increased in chronic kidney disease (CKD) and have been associated with arterial and cardiac dysfunction and reduced survival. Troponin T (cTnT) is released from cardiac myocytes under conditions of stress and is predictive of mortality across a range of renal functions. However, the utility of this biomarker was formerly limited by the lower limit of assay detection. The introduction of a high-sensitivity assay has enabled more detailed study of myocyte stress below the previous limit of detection. We studied the association of mediators of CKD-MBD with arterial stiffness and also of these mediators and arterial stiffness with myocardial damage in patients with CKD stages 3-4.
METHODS
OPG and FGF-23 were measured in 200 CKD stages 3-4 patients. cTnT was measured using a high-sensitivity assay. Aortic stiffness was assessed using aortic pulse wave velocity (APWV).
RESULTS
Mean age was 69 ± 11 years, mean systolic/diastolic blood pressure was 151 ± 22/81 ± 11 mmHg and mean renal function was 33 ± 11 mL/min/1.73 m². OPG, FGF-23, high-sensitivity troponin T (hs-cTnT) and APWV all correlated with renal function. After multivariate analysis, OPG and age remained independently associated with aortic stiffness. OPG and FGF-23 were independently associated with hs-cTnT in addition to other non-traditional risk factors (model R² = 0.596).
CONCLUSION
We have shown that changes in bone mediators and phosphate metabolism induced by CKD are independently associated with vascular and cardiomyocyte dysfunction. Our findings suggest that cardiac dysfunction may be specifically associated with such abnormalities in addition to recognized increases in vascular stiffness.
BACKGROUND
The HIV integrase strand transfer inhibitor elvitegravir (EVG) has been co-formulated with the CYP3A4 inhibitor cobicistat (COBI), emtricitabine (FTC), and tenofovir disoproxil fumarate (TDF) into a once-daily, single tablet. We compared EVG/COBI/FTC/TDF with a ritonavir-boosted (RTV) protease inhibitor regimen of atazanavir (ATV)/RTV+FTC/TDF as initial therapy for HIV-1 infection.
METHODS
This phase 3, non-inferiority study enrolled treatment-naive patients with an HIV-1 RNA concentration of 5000 copies per mL or more and susceptibility to atazanavir, emtricitabine, and tenofovir. Patients were randomly assigned (1:1) to receive EVG/COBI/FTC/TDF or ATV/RTV+FTC/TDF plus matching placebos, administered once daily. Randomisation was by a computer-generated random sequence, accessed via an interactive telephone and web response system. Patients, and investigators and study staff who gave treatments, assessed outcomes, or analysed data were masked to the assignment. The primary endpoint was HIV RNA concentration of 50 copies per mL or less after 48 weeks (according to the US FDA snapshot algorithm), with a 12% non-inferiority margin. This trial is registered with ClinicalTrials.gov, number NCT01106586.
FINDINGS
1017 patients were screened, 715 were enrolled, and 708 were treated (353 with EVG/COBI/FTC/TDF and 355 with ATV/RTV+FTC/TDF). EVG/COBI/FTC/TDF was non-inferior to ATV/RTV+FTC/TDF for the primary outcome (316 patients [89·5%] vs 308 patients [86·8%]; adjusted difference 3·0%, 95% CI -1·9% to 7·8%). Both regimens had favourable safety and tolerability; 13 (3·7%) versus 18 (5·1%) patients discontinued treatment because of adverse events. Fewer patients receiving EVG/COBI/FTC/TDF had abnormal results in liver function tests than did those receiving ATV/RTV+FTC/TDF, and they had smaller median increases in fasting triglyceride concentration (90 μmol/L vs 260 μmol/L, p=0·006). Small median increases in serum creatinine concentration, with accompanying decreases in estimated glomerular filtration rate, occurred in both study groups by week 2; they generally stabilised by week 8 and did not change up to week 48 (median change 11 μmol/L vs 7 μmol/L).
INTERPRETATION
If regulatory approval is given, EVG/COBI/FTC/TDF would be the first integrase-inhibitor-based regimen given once daily and the only one formulated as a single tablet for initial HIV treatment.
FUNDING
Gilead Sciences.
BACKGROUND
Several studies have reported on an association between hepatitis C virus (HCV) antibody status and the development of chronic kidney disease (CKD), but the role of HCV viremia and genotype are not well defined.
METHODS
Patients with at least three serum creatinine measurements after 1 January 2004 and known HCV antibody status were included. Baseline was defined as the first eligible estimated glomerular filtration rate (eGFR) (Cockcroft-Gault equation), and CKD was either a confirmed (>3 months apart) eGFR of 60 ml/min per 1.73 m² or less for patients with a baseline eGFR more than 60 ml/min per 1.73 m², or a confirmed 25% decline in eGFR for patients with a baseline eGFR of 60 ml/min per 1.73 m² or less. Incidence rates of CKD were compared between HCV groups (anti-HCV-negative, anti-HCV-positive with or without viremia) using Poisson regression.
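The two-arm endpoint definition above can be expressed as a simple rule. A sketch assuming 'confirmed' means two qualifying measurements more than ~91 days apart, an interpretation of the abstract's '>3 months' (the exact confirmation window is not stated):

```python
def ckd_progression(baseline_egfr: float,
                    egfr_series: list[tuple[float, float]]) -> bool:
    """Apply the abstract's CKD endpoint to (days_from_baseline, eGFR) pairs.

    For baseline eGFR >60 ml/min/1.73m2 the event is a confirmed eGFR <=60;
    for baseline <=60 it is a confirmed 25% decline from baseline.
    'Confirmed' is taken here as two qualifying measurements >91 days apart,
    an assumption, since the abstract only says '>3 months apart'."""
    if baseline_egfr > 60:
        qualifying = [d for d, e in egfr_series if e <= 60]
    else:
        qualifying = [d for d, e in egfr_series if e <= 0.75 * baseline_egfr]
    # Confirmed if any two qualifying measurements are >91 days apart.
    return any(d2 - d1 > 91
               for i, d1 in enumerate(qualifying)
               for d2 in qualifying[i + 1:])
```

For example, a patient with baseline eGFR 90 whose eGFR falls to 58 on day 50 and 55 on day 200 meets the endpoint, whereas a single low value, or two low values a month apart, does not.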
RESULTS
Of 8235 patients with known anti-HCV status, 2052 (24.9%) were anti-HCV-positive, of whom 983 (47.9%) were HCV-RNA-positive, 193 (9.4%) HCV-RNA-negative and 876 (42.7%) had unknown HCV-RNA status. At baseline, the median eGFR was 97.6 (interquartile range 83.8-113.0) ml/min per 1.73 m². During 36,123 person-years of follow-up (PYFU), 495 patients progressed to CKD (6.0%), with an incidence rate of 13.7 per 1000 PYFU (95% confidence interval 12.5-14.9). In a multivariate Poisson model, patients who were anti-HCV-positive with HCV viremia had a higher incidence rate of CKD, whereas patients with cleared HCV infection had an incidence rate of CKD similar to that of anti-HCV-negative patients. There was no association between CKD and HCV genotype.
CONCLUSION
Compared with HIV-monoinfected patients, HIV-positive patients with chronic rather than cleared HCV infection were at increased risk of developing CKD, suggesting a contribution from active HCV infection toward the pathogenesis of CKD.
CONTEXT
Therapies to decrease immune activation might be of benefit in slowing HIV disease progression.
OBJECTIVE
To determine whether hydroxychloroquine decreases immune activation and slows CD4 cell decline.
DESIGN, SETTING, AND PATIENTS
Randomized, double-blind, placebo-controlled trial performed at 10 HIV outpatient clinics in the United Kingdom between June 2008 and February 2011. The 83 patients enrolled had asymptomatic HIV infection, were not taking antiretroviral therapy, and had CD4 cell counts greater than 400 cells/μL.
INTERVENTION
Hydroxychloroquine, 400 mg, or matching placebo once daily for 48 weeks.
MAIN OUTCOME MEASURES
The primary outcome measure was change in the proportion of activated CD8 cells (measured by the expression of CD38 and HLA-DR surface markers), with CD4 cell count and HIV viral load as secondary outcomes. Analysis was by intention to treat using mixed linear models.
RESULTS
There was no significant difference in CD8 cell activation between the 2 groups (-4.8% and -4.2% in the hydroxychloroquine and placebo groups, respectively, at week 48; difference, -0.6%; 95% CI, -4.8% to 3.6%; P = .80). Decline in CD4 cell count was greater in the hydroxychloroquine than placebo group (-85 cells/μL vs -23 cells/μL at week 48; difference, -62 cells/μL; 95% CI, -115 to -8; P = .03). Viral load increased in the hydroxychloroquine group compared with placebo (0.61 log10 copies/mL vs 0.23 log10 copies/mL at week 48; difference, 0.38 log10 copies/mL; 95% CI, 0.13 to 0.63; P = .003). Antiretroviral therapy was started in 9 patients in the hydroxychloroquine group and 1 in the placebo group. Trial medication was well tolerated, but more patients reported influenza-like illness in the hydroxychloroquine group compared with the placebo group (29% vs 10%; P = .03).
CONCLUSION
Among HIV-infected patients not taking antiretroviral therapy, the use of hydroxychloroquine compared with placebo did not reduce CD8 cell activation but did result in a greater decline in CD4 cell count and increased viral replication.
TRIAL REGISTRATION
isrctn.org Identifier: ISRCTN30019040.
OBJECTIVES
To characterize persons undergoing HIV genotypic resistance testing (GRT) while treatment naive and to estimate the prevalence of transmitted HIV drug resistance (TDR) among HIV-positive outpatients in Ontario, Canada.
METHODS
We analysed data from a multi-site cohort of persons receiving HIV care. Data were obtained from medical chart abstractions, interviews and record linkage with the Public Health Laboratories, Public Health Ontario. The analysis was restricted to 626 treatment-naive persons diagnosed in 2002-09. TDR mutations were identified using the calibrated population resistance tool. We used descriptive statistics and regression methods to characterize treatment-naive GRT test uptake and patterns of TDR.
RESULTS
Overall, 53.2% (333/626) of participants had baseline GRT. The proportion increased with year of HIV diagnosis, from 30.0% in 2002 to 82.6% in 2009 (P < 0.0001). Among those tested, 13.6% (CI 9.9-17.3%) had one or more drug resistance mutations, and 8.8% (CI 5.7-11.8%), 4.8% (CI 2.5-7.2%) and 2.7% (CI 1.0-4.5%) had mutations conferring resistance to nucleos(t)ide reverse transcriptase inhibitors (NRTIs), non-nucleoside reverse transcriptase inhibitors (NNRTIs) and protease inhibitors (PIs), respectively. TDR prevalence increased from 2002-07 to 2008-09 (adjusted OR 3.7, 95% CI 1.7-8.2), driven by a higher proportion with NRTI (18.2% versus 5.9%, P = 0.0009) and NNRTI mutations (11.7% versus 2.8%, P = 0.004) in the later time period. PI TDR remained unchanged.
CONCLUSIONS
Baseline GRT uptake has increased dramatically since 2002 but remains below 100%. The prevalence of overall TDR tripled, driven by increases in NRTI and NNRTI mutations. These findings highlight the value of routine baseline GRT for TDR surveillance and patient care.
The overall purpose of these guidelines is to provide guidance on best clinical practice in the treatment and management of adults with HIV infection with antiretroviral therapy (ART). The scope includes: (i) guidance on the initiation of ART in those previously naïve to therapy; (ii) support of patients on treatment; (iii) management of patients experiencing virological failure; and (iv) recommendations in specific patient populations where other factors need to be taken into consideration. The guidelines are aimed at clinical professionals directly involved with and responsible for the care of adults with HIV infection and at community advocates responsible for promoting the best interests and care of HIV-positive adults. They should be read in conjunction with other published BHIVA guidelines.
OBJECTIVES
The magnitude of HIV viral rebound following ART cessation has consequences for clinical outcome and onward transmission. We compared plasma viral load (pVL) rebound after stopping ART initiated in primary (PHI) and chronic HIV infection (CHI).
DESIGN
Two populations with protocol-indicated ART cessation from the SPARTAC (PHI, n = 182) and SMART (CHI, n = 1450) trials.
METHODS
Time for pVL to reach pre-ART levels after stopping ART was assessed in PHI using survival analysis. Differences in pVL between PHI and CHI populations 4 weeks after stopping ART were examined using linear and logistic regression. Differences in pVL slopes up to 48 weeks were examined using linear mixed models and viral burden was estimated through a time-averaged area-under-pVL curve. CHI participants were categorised by nadir CD4 at ART stop.
RESULTS
Of 171 PHI participants, 71 (41.5%) rebounded to pre-ART pVL levels, at a median of 50 (95% CI 48-51) weeks after stopping ART. Four weeks after stopping treatment, although the proportion with pVL ≥ 400 copies/ml was similar (78% PHI versus 79% CHI), levels were 0.45 (95% CI 0.26-0.64) log10 copies/ml lower for PHI versus CHI, and remained lower up to 48 weeks. Lower CD4 nadir in CHI was associated with higher pVL after ART stop. Rebound for CHI participants with CD4 nadir >500 cells/mm³ was comparable to that experienced by PHI participants.
CONCLUSIONS
Stopping ART initiated in PHI and CHI was associated with viral rebound to levels conferring increased transmission risk, although the level of rebound was significantly lower in PHI than in CHI, and this difference was sustained.
BACKGROUND
Estimates of treatment failure, change and interruption are lacking for individuals treated in early HIV infection.
METHODS
Using CASCADE data, we compared the effect of treatment in early infection (within 12 months of seroconversion) with that seen in chronic infection on risk of virological failure, change and interruption. Failure was defined as two subsequent measures of HIV RNA >1,000 copies/ml following suppression (<500 copies/ml), or >500 copies/ml 6 months following initiation. Treatment change and interruption were defined as modification or interruption lasting >1 week. In multivariable competing risks proportional subdistribution hazards models, we adjusted for combination antiretroviral therapy (cART) class, sex, risk group, age, CD4+ T-cell count, HIV RNA and calendar period at treatment initiation.
RESULTS
Of 1,627 individuals initiating cART early (median 1.8 months from seroconversion), 159, 395 and 692 failed, changed and interrupted therapy, respectively. For 2,710 individuals initiating cART in chronic infection (median 35.9 months from seroconversion), the corresponding values were 266, 569 and 597. Adjusted hazard ratios (HRs; 95% CIs) for treatment failure and change were similar between the two treatment groups (0.93 [0.72, 1.20] and 1.06 [0.91, 1.24], respectively). There was an increasing trend in rates of interruption over calendar time for those treated early, and a decreasing trend for those starting treatment in chronic infection. Consequently, compared with chronic infection, treatment interruption was similar for early starters in the early cART period, but the relative hazard increased over calendar time (1.54 [1.33, 1.79] in 2000).
CONCLUSIONS
Individuals initiating treatment in early HIV infection are more likely to interrupt treatment than those initiating later. However, rates of failure and treatment change were similar between the two groups.
BACKGROUND
Differences in access to care and treatment have been reported in Eastern Europe, a region with one of the fastest growing HIV epidemics, compared to the rest of Europe. This analysis aimed to establish whether there are regional differences in the mortality rate of HIV-positive individuals across Europe, and Argentina.
METHODS
A total of 13,310 individuals under follow-up were included in the analysis. Poisson regression was used to investigate factors associated with the risk of death.
FINDINGS
During 82,212 person-years of follow-up (PYFU), 1,147 individuals died (mortality rate 14.0 per 1,000 PYFU; 95% confidence interval [CI] 13.1-14.8). Significant differences between regions were seen in the rate of all-cause, AIDS and non-AIDS related mortality (global p<0.0001 for all three endpoints). Compared to South Europe, after adjusting for baseline demographics, laboratory measurements and treatment, a higher rate of AIDS related mortality was observed in East Europe (IRR 2.90, 95%CI 1.97-4.28, p<.0001), and a higher rate of non-AIDS related mortality in North Europe (IRR 1.51, 95%CI 1.24-1.82, p<.0001). The differences observed in North Europe decreased over calendar time; in 2009-2011, the higher rate of non-AIDS related mortality was no longer significantly different from that in South Europe (IRR 1.07, 95%CI 0.66-1.75, p = 0.77). However, in 2009-2011, there remained a higher rate of AIDS-related mortality (IRR 2.41, 95%CI 1.11-5.25, p = 0.02) in East Europe compared to South Europe in adjusted analysis.
INTERPRETATIONS
There are significant differences in the rate of all-cause mortality among HIV-positive individuals across different regions of Europe and Argentina. Individuals in Eastern Europe had an increased risk of mortality from AIDS related causes and individuals in North Europe had the highest rate of non-AIDS related mortality. These findings are important for understanding and reviewing HIV treatment strategies and policies across the European region.
To assess the ability of three genitourinary medicine centres to clinically identify primary HIV infection (PHI). Cases of recently acquired HIV infection, identified using the Health Protection Agency (HPA) avidity assay on all HIV diagnoses from January to August 2009, were investigated by case-note review. Sixty-four individuals were identified as PHI using the HPA avidity assay. Of these 64 individuals, 31 (48%) were identified clinically. Imperial College identified 8/26 (31%), Guys and St Thomas' 15/27 (56%) and Brighton 8/11 (73%). Clinical suspicion of PHI was associated with reported unprotected anal intercourse (P = 0.017), seroconversion symptoms (P = 0.0004), a negative HIV test within six months (P = 0.024) and avidity assay result availability (P = 0.0169). Seventy percent of missed PHI cases had a documented risk factor. Thirty-five percent of those clinically identified with PHI were documented as having been informed of the associated enhanced infectivity. Suspicion of PHI was low despite documented risk factors and recent HIV-negative antibody tests. Counselling to prevent onward transmission was suboptimal.
BACKGROUND
HIV cohort collaborations, which pool data from diverse patient cohorts, have provided key insights into outcomes of antiretroviral therapy (ART). However, the extent of, and reasons for, between-cohort heterogeneity in rates of AIDS and mortality are unclear.
METHODS
We obtained data on adult HIV-positive patients who started ART from 1998 without a previous AIDS diagnosis from 17 cohorts in North America and Europe. Patients were followed up from 1 month to 2 years after starting ART. We examined between-cohort heterogeneity in crude and adjusted (age, sex, HIV transmission risk, year, CD4 count and HIV-1 RNA at start of ART) rates of AIDS and mortality using random-effects meta-analysis and meta-regression.
RESULTS
During 61 520 person-years, 754/38 706 (1.9%) patients died and 1890 (4.9%) progressed to AIDS. Between-cohort variance in mortality rates was reduced from 0.84 to 0.24 (0.73 to 0.28 for AIDS rates) after adjustment for patient characteristics. Adjusted mortality rates were inversely associated with cohorts' estimated completeness of death ascertainment [excellent: 96-100%, good: 90-95%, average: 75-89%; mortality rate ratio 0.66 (95% confidence interval 0.46-0.94) per category]. Mortality rate ratios comparing Europe with North America were 0.42 (0.31-0.57) before and 0.47 (0.30-0.73) after adjusting for completeness of ascertainment.
CONCLUSIONS
Heterogeneity between settings in outcomes of HIV treatment has implications for collaborative analyses, policy and clinical care. Estimated mortality rates may require adjustment for completeness of ascertainment. The higher mortality rate in North American, compared with European, cohorts was not fully explained by completeness of ascertainment and may reflect the inclusion of more socially marginalized patients with higher mortality risk.
BACKGROUND
The aim of this study was to determine whether there is a protective effect of combination antiretroviral therapy (cART) on the development of clinical events in patients with ongoing severe immunosuppression.
METHODS
A total of 3,780 patients from the EuroSIDA study under follow-up after 2001 with a current CD4+ T-cell count ≤200 cells/mm³ were stratified into five groups: group 1, viral load (VL) <50 copies/ml on cART; group 2, VL 50-99,999 copies/ml on cART; group 3, VL 50-99,999 copies/ml off cART; group 4, VL ≥100,000 copies/ml on cART; and group 5, VL ≥100,000 copies/ml off cART. Poisson regression was used to identify the risk of (non-fatal or fatal) AIDS- and non-AIDS-related events considered together (AIDS/non-AIDS) or separately as AIDS or non-AIDS events within each group.
RESULTS
There were 428 AIDS/non-AIDS events during 3,780 person-years of follow-up. Compared with group 1, those in group 2 had a similar incidence of AIDS/non-AIDS events (incidence rate ratio [IRR] 1.04; 95% CI 0.79-1.36). Groups 3, 4 and 5 had significantly higher incidence rates of AIDS/non-AIDS events compared with group 1; incidence rates increased from group 3 (IRR 1.78; 95% CI 1.25-2.55) to group 5 (IRR 2.36; 95% CI 1.66-3.40), demonstrating the increased incidence of AIDS/non-AIDS events associated with increasing viraemia. After adjustment, the use of cART was associated with a 40% reduction in the incidence of AIDS/non-AIDS events in patients with VL 50-99,999 copies/ml (IRR 0.59; 95% CI 0.41-0.85) and in those with VL ≥100,000 copies/ml (IRR 0.66; 95% CI 0.44-1.00). Similar relationships were seen for non-AIDS events and AIDS events when considered separately.
CONCLUSIONS
In patients with ongoing severe immunosuppression, cART was associated with significant clinical benefits in patients with suboptimal virological control or virological failure.
This study extends resource dependence theory by examining the role of corporate political resources in shaping firm strategy. Using a sample of 92 Japanese regional banks over the period 1997-2004, this paper explores the effect of amakudari (the appointment of former bureaucrats to the boards of directors of private organizations) on the performance and managerial risk-taking of banks. We found that amakudari networks had a negative impact on bank profitability in the post-bubble crisis era after controlling for firm-specific and other variables. In addition, the evidence shows that banks appointing more ex-bureaucrats to their boards of directors tended to engage in riskier lending activities. Furthermore, the empirical results of this study are robust to estimation with the Arellano-Bond GMM estimator.